
digitalFAQ.com Forum (https://www.digitalfaq.com/forum/)
-   Restore, Filter, Improve Quality (https://www.digitalfaq.com/forum/video-restore/)
-   -   Identifying artifacts, fixed with video filters? (https://www.digitalfaq.com/forum/video-restore/8048-identifying-artifacts-fixed.html)

bilditup1 06-11-2017 04:24 PM

Identifying artifacts, fixed with video filters?
 
3 Attachment(s)
I recently began capturing VHS again, which has been an on-again/off-again project for the past couple of years. I think my hardware and software chains are alright, following the guides here. But when it comes to processing I'm at a bit of a loss. To begin with I'm having some difficulty with the terminology. I've attached some excerpts from my most recent cap with the artifacts in question.

In the first video, what are the alternating vertical rainbow bands just right of center? It isn't called 'rainbowing' afaik but as a result I don't really know what to do with it, or if anything can be done.
In the second video, is this - the white streaks rolling down - what is popularly referred to as 'dropout noise'? In an earlier thread lordsmurf mentioned a 'complex temporal capture chain' that can deal with this, and I think I saw someone mention a kind of 'averaging method' that potentially could - but the artifacts featured in the before/after screenshot there didn't look precisely like the ones I'm talking about.
Note that these are capped from a frighteningly moldy tape, which I subsequently cleaned up following this guide. (Most of which is OK - these parts were from the beginning and end of the tape. The tape was also stored improperly - it wasn't entirely spooled on either side - and the section where it was stopped is also pretty dubious. The selections here were chosen to avoid personally identifying anybody.) Many of the tapes I'll have to deal with immediately are in similarly bad shape, and if errant sections like this are not salvageable, we'll be OK, even if it isn't ideal. But I'd like to know for sure that there is nothing to be done (my suspicion) before writing it off.

Separately: I've noticed the recommended method here to deal with head-switching noise is cropping and adding borders. But if one crops without modifying the aspect ratio (iow as long as one crops 3 vertical for every 2 horizontal lines) and the crops are also done at mod 4, what is lost by then resizing back to (in the case of NTSC VHS) 720x480? I guess a resize would soften things a bit, but is this really so appreciable? I'll do some tests to find out, but it would be nice to have input from the gurus here :)
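
For concreteness, a minimal sketch of what I have in mind (resizer and crop numbers picked arbitrarily; I realize interlaced material would need field handling around a vertical resize):
Code:

AviSource("capture.avi")
Crop(12, 8, -12, -8)      # 696x464: mod-4, and 24:16 = 3:2, so aspect is preserved
Spline36Resize(720, 480)  # back up to the full NTSC frame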

themaster1 06-12-2017 05:04 AM

Try this for your rainbow, for a starter:
Code:

AVISource("rainbow_no-audio_shortened.avi")
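# assumes MVTools2 is installed, plus RemoveSpotsMC.avs (RemoveDirt package)
# for the RemoveSpotsMC() calls below, and the CCD VirtualDub filter loaded further down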
assumetff()
#1) Move chroma up because it's misplaced
separatefields()
A=Last
B=A.Greyscale()
Overlay(B,A,X=0,Y=-2,Mode="Chroma")
weave()
######
#2)
converttoyv12(matrix="rec601",interlaced=true)
c=last
 #even fields
# Motion compensation on frames -2...+2
even = c.SeparateFields().SelectEven()
super_even=even.MSuper()
vf2 = super_even.MAnalyse (isb=false, delta=2, overlap=4)
vf1 = super_even.MAnalyse (isb=false, delta=1, overlap=4)
vb1 = super_even.MAnalyse (isb=true,  delta=1, overlap=4)
vb2 = super_even.MAnalyse (isb=true,  delta=2, overlap=4)
cf2 = MCompensate (even, super_even, vf2, thSAD=400)
cf1 = MCompensate (even, super_even, vf1, thSAD=400)
cb1 = MCompensate (even, super_even, vb1, thSAD=400)
cb2 = MCompensate (even, super_even, vb2, thSAD=400)
# spot removal on the motion-compensated clip
Interleave (cf2, cf1, even, cb1, cb2)
RemoveSpotsMC()
SelectEvery (5, 2)
filtered_even=last
 #odd fields
# Motion compensation on frames -2...+2
odd = c.SeparateFields().SelectOdd()
super_odd=odd.MSuper()
vf2 = super_odd.MAnalyse (isb=false, delta=2, overlap=4)
vf1 = super_odd.MAnalyse (isb=false, delta=1, overlap=4)
vb1 = super_odd.MAnalyse (isb=true,  delta=1, overlap=4)
vb2 = super_odd.MAnalyse (isb=true,  delta=2, overlap=4)
cf2 = MCompensate (odd, super_odd, vf2, thSAD=400)
cf1 = MCompensate (odd, super_odd, vf1, thSAD=400)
cb1 = MCompensate (odd, super_odd, vb1, thSAD=400)
cb2 = MCompensate (odd, super_odd, vb2, thSAD=400)
# spot removal on the motion-compensated clip
Interleave (cf2, cf1, odd, cb1, cb2)
RemoveSpotsMC()
SelectEvery (5, 2)
filtered_odd=last
Interleave(filtered_even, filtered_odd)
weave()

#3) desaturate with CCD  in RGB colorspace
ConverttoRGB32(matrix="rec601",interlaced=true)
separatefields()
#### CCD COLOR DENOISING (desaturation) :
LoadVirtualDubPlugin("C:\Program Files (x86)\virtualdubmod1.5\plugins\Camcorder_Color_Denoise_sse2.vdf", "CCD", 1)
CCD(15,1) #  0-100, default 30
weave()
converttoyv12(matrix="Rec601",interlaced=true)
mergechroma(last)
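
# note: RemoveSpotsMC() used above is expected to come from RemoveSpotsMC.avs
# (RemoveDirt package); the RemoveDirtMC helper below belongs to the same family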

function RemoveDirtMC(clip, int "limit", bool "_grey")
{
  _grey=default(_grey, false)
  limit = default(limit,6)
  i=MSuper(clip,pel=2)
  bvec = MAnalyse(i,isb=false, blksize=8, delta=1, truemotion=true)
  fvec = MAnalyse(i,isb=true, blksize=8, delta=1, truemotion=true)
  backw = MFlow(clip,i,bvec)
  forw  = MFlow(clip,i,fvec)
  clp=interleave(backw,clip,forw)
  clp=clp.RemoveDirt(limit,_grey)
  clp=clp.SelectEvery(3,1)
  return clp
}


sanlyn 06-12-2017 01:34 PM

4 Attachment(s)
Well, some things you can fix, some you can't. We've all had our share.

Quote:

Originally Posted by bilditup1 (Post 49718)
I've noticed the recommended method here to deal with head-switching noise is cropping and adding borders. But if one crops without modifying the aspect ratio (iow as long as one crops 3 vertical for every 2 horizontal lines) and the crops are also done at mod 4, what is lost by then resizing back to (in the case of NTSC VHS) 720x480? I guess a resize would soften things a bit, but is this really so appreciable?

Some notes on that. I don't know why people have to resize to fill cropped borders. Modern TVs employ overscan by default (despite what they tell you at BestBuy), so part of the resized image is hidden unless you disable overscan. Cropping by mod-4 is for 4:2:0 colorspaces like interlaced YV12. For YUY2, progressive video, RGB, etc., the rules differ. The rules for Avisynth Crop() are here: http://avisynth.nl/index.php/Crop.
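
To illustrate the mod rules from that page (my shorthand; check the table there for your own format):
Code:

# interlaced YV12: left/right crop values mod-2, top/bottom values mod-4
Crop(8, 4, -8, -4)
# YUY2 allows any vertical value, e.g. Crop(2, 1, -2, -1)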

3:2 is the frame aspect ratio of square-pixel 720x480 NTSC, but not of encoded anamorphic video, which uses a 10:11 pixel aspect ratio for 4:3 NTSC. I don't know why people have to fill frames with small borders. At 4:3 on a wide panel the borders blend in with black anyway. Resizing doesn't seem to be worth it to me, but make up your own mind whether further degradation of something that already looks like garbage matters to you. I note that people don't worry about borders when movies on cable TV or DVD/BluRay don't fill the screen entirely.

The flicker in your rainbow sample (it's not rainbows) is from gross oversaturation of the V channel, with mistracking making it worse. Red exceeds RGB 255, so it looks discolored and even worse when played back in RGB used by all playback devices. The fade-in from black (actually, it's from dark gray at about RGB 28, not black) looks weird with block noise and hard gradients, likely due to using those NLE transition effects on interlaced material when the work should have been done progressive (that image is a resized color photo, isn't it?) and encoded as interlaced later if necessary. Some NLEs do a cleaner job than others with that sort of thing. It's a very soft image to begin with, tough to work with.

Rips and dropouts aren't the only problem. Besides invalid chroma levels there's also vertical jitter (frame hops). Denoisers and anti-dropout filters really hate that sort of thing and don't perform at their best. I assume you've repacked the tape several times to even up feed reel windings for smoother motion into the tape path.

You gave no info on your playback capture chain, so can't advise about that.

The rip cleaner I used in the scripts below is from several forums, including this one. The modded version I used is for video that's either deinterlaced or uses SeparateFields(). In both cases I deinterlaced, then re-interlaced later.

The text of the anti-rip median filter is attached to this post in its progressive version as FixRipsP.avs. You can paste it into the bottom of a script, or save it as an .avs file and import it with the Avisynth Import() function (http://avisynth.nl/index.php/Import).

The version for interlaced material is attached as FixRipsA.avs. Which one to use depends on the video. The progressive version tends to remove too much detail during interlaced motion.

For either version you'll need extra plugins: masktools2, mvtools2, RGTools (the updated version of RemoveGrain), and DePan. If you have the QTGMC deinterlacing package you probably have many of these.

The cleanup I did on "rainbow_no-audio_shortened.avi" took some experimentation to come up with the right sequence and settings for the filters. I used the 16-bit dither package and GradFun3 to try to clean up some of those bad fade-up gradients. The dither package requires its own plugin versions, which I loaded from a separate plugins folder in Avisynth. In the scripts posted here you have to modify path statements to match locations in your system.

You could probably load the rainbow clip results into VirtualDub to fix up the amount of red you want, but be careful about saturation levels. Use a tool like ColorMill and the ColorTools 1.4 histogram to check red channel levels. If you oversaturate again the flicker will come back to haunt you.
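
If you'd rather do a quick check in Avisynth before moving to VirtualDub, something like this is enough (just a sketch):
Code:

AviSource("rainbow_no-audio_shortened.avi")
ConvertToYV12(interlaced=true)
Histogram(mode="levels")  # Y/U/V plots; a V plot slammed against the right edge means oversaturated red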

Code:

Import("D:\Avisynth 2.5\plugins\FixRipsP.avs")
# ============ dither plugins ================
dppath="D:\Avisynth 2.5\plugins\AVS26\dither\"
Import(dppath+"Dither.avs")
Import(dppath+"mt_xxpand_multi.avs")
LoadPlugin(dppath+"avstp.dll")
LoadPlugin(dppath+"dither.dll")
LoadPlugin(dppath+"mvtools2.dll")
LoadPlugin(dppath+"masktools2.dll")

AviSource("E:\forum\faq\bilditup1\rainbow_no-audio_shortened.avi")
ConvertToYV12(interlaced=true)
ColorYUV(off_y=-15)
ColorYUV(cont_v=-100)
AssumeTFF()
QTGMC(preset="super fast",border=true)
Stab()
ChromaShift(C=-2,L=-4)
Crop(10,2,-2,-10).AddBorders(6,6,6,6)
FixRipsP()
Dither_convert_8_to_16 ()
GradFun3(thr=0.6,mask=0,lsb_in=true,lsb=false)
MergeChroma(aWarpSharp(depth=30))
LSFMod()
# ========= re-interlace ==========
SeparateFields().SelectEvery(4,0,3).Weave()
return last

With the "dropout_no-audio.avi" sample, you don't have enough good frames for any version median or averaging routines to get more than one or two good frames. My advice would be to run the FixRipsP routine and try to find as clean a frame as you can, then loop that frame for the number of frames you want in the video. In other words, the only clean version you'll get from this disaster is to remake the clip from a good frame. In this case one of the cleanest frames I found was frame #1, the second frame after running FixRipsP.

Code:

LoadCPlugin("D:\Avisynth 2.5\plugins\yadif.dll")
Import("D:\Avisynth 2.5\plugins\FixRipsP.avs")

AviSource("E:\forum\faq\bilditup1\dropout_no-audio.avi")
ConvertToYV12(interlaced=true)
AssumeTFF()
SeparateFields()
ChromaShift(C=-2)
MergeChroma(aWarpSharp2(depth=30))
FixRipsP()
Weave()
# ======== make a new clip of 136 frames using frame 1 ========
newClip=Last.Loop(136,1,1).Trim(1,137)
return (newClip.Crop(12,0,0,-10).AddBorders(6,4,6,6))

By the way, what's that garbage on the left-hand border of the samples ? ? ?

bilditup1 06-12-2017 05:15 PM

1 Attachment(s)
Quote:

Originally Posted by themaster1 (Post 49723)
try this for your rainbow for a starter

I'd like to respond but really have no clue how this works. Thanks, though.


Quote:

Originally Posted by sanlyn (Post 49727)
Well, some things you can fix, some you can't. We've all had our share.

Yup! I'm just trying to figure out where this falls out.

Quote:

Originally Posted by sanlyn (Post 49727)
Some notes on that. I don't know why people have to resize to fill cropped borders. Modern TVs employ overscan by default (despite what they tell you at BestBuy), so part of the resized image is hidden unless you disable overscan. Cropping by mod-4 is for 4:2:0 colorspaces like interlaced YV12. For YUY2, progressive video, RGB, etc., the rules differ. The rules for Avisynth Crop() are here: http://avisynth.nl/index.php/Crop.

3:2 is the frame aspect ratio of square-pixel 720x480 NTSC, but not of encoded anamorphic video, which uses a 10:11 pixel aspect ratio for 4:3 NTSC. I don't know why people have to fill frames with small borders. At 4:3 on a wide panel the borders blend in with black anyway. Resizing doesn't seem to be worth it to me, but make up your own mind whether further degradation of something that already looks like garbage matters to you. I note that people don't worry about borders when movies on cable TV or DVD/BluRay don't fill the screen entirely.

I understand that modern TVs overscan by default but the video could easily be viewed on a computer, tablet, etc. It seems a waste to hardcode black pixels if it could be avoided. It also purportedly results in a less efficient encode, aside from the bits wasted on black pixels.
And yeah, I know that we're dealing with anamorphic encoding here. Regardless, I thought that maintaining the pixel aspect ratio when cropping would be important if one is going to then resize back up to another resolution in that aspect ratio, in order to avoid subtly distorting the result. In any case, it was a minor point - like you said, it's not terribly noticeable, so we'll see. Thanks also for setting me straight re mod-4, which according to the chart appears to only be necessary for interlaced YV12:

Attachment 7626

Anyway...

Quote:

Originally Posted by sanlyn (Post 49727)
The flicker in your rainbow sample (it's not rainbows) is from gross oversaturation of the V channel, with mistracking making it worse. Red exceeds RGB 255, so it looks discolored and even worse when played back in RGB used by all playback devices.

Should I have tried to check and attempt to account for this during capture? I used VirtualDub's histogram to make sure the Y values were valid per your guide but I suppose there isn't an easy way to do this for color/it's easier to just use ColorYUV to fix later?

Quote:

Originally Posted by sanlyn (Post 49727)
The fade-in from black (actually, it's from dark gray at about RGB 28, not black) looks weird with block noise and hard gradients, likely due to using those NLE transition effects on interlaced material when the work should have been done progressive (that image is a resized color photo, isn't it?) and encoded as interlaced later if necessary. Some NLEs do a cleaner job than others with that sort of thing. It's a very soft image to begin with, tough to work with.

Did I miss something? Was there 'progressive material' in video production thirty years ago? Were there non-linear editors? (I thought that this was an actual shot of the Verrazano, but the next four minutes or so - which also suffers from this color-flicker stuff - is a photo montage, so I guess it could be?)

Quote:

Originally Posted by sanlyn (Post 49727)
Rips and dropouts aren't the only problem. Besides invalid chroma levels there's also vertical jitter (frame hops). Denoisers and anti-dropout filters really hate that sort of thing and don't perform at their best. I assume you've repacked the tape several times to even up feed reel windings for smoother motion into the tape path.

I did this maybe once after completing the cleaning (the pack IIRC looked OK though). Do you recommend fast-forward/rewinding several times in order to get a good pack? I didn't know that that was the process. Do you think I should cap again?

Quote:

Originally Posted by sanlyn (Post 49727)
You gave no info on your playback capture chain, so can't advise about that.

Panasonic AG-1980 (TGP 'treated'), DataVideo TBC-3000, ATI AIW9000. I also have an Elite Video BVP4+ that is not currently in the chain - I was under the impression (IIRC from something you said a while ago) that this was only to be used as a last resort so it usually stays off to the side. Do you think this video warrants its use? Using Color Tools per your tip below I found that G and B also hit 255.

Quote:

Originally Posted by sanlyn (Post 49727)
You could probably load the rainbow clip results into VirtualDub to fix up the amount of red you want

So, how is that done, using either VDub or AVS? How is that different from simply undoing what ColorYUV() does in your script?

Code:

ChromaShift(C=-2,L=-4)
How does one tell if the chroma is in the wrong place, and by how much?

Code:

SeparateFields().SelectEvery(4,0,3).Weave()
Let me see if I understood what this is doing. From QTGMC a 60 frames per second progressive clip emerged. Here, SeparateFields() splits each frame into two fields (120 fields/sec). SelectEvery() says that of every four of these fields, the 0th and the 3rd fields should be kept. Then Weave() reassembles our interlaced frames. Did I get it right? Is this the preferred way to reinterlace? Is using the superfast preset on QTGMC sufficient to convert to progressive for processing purposes?

Quote:

Originally Posted by sanlyn (Post 49727)
With the "dropout_no-audio.avi" sample, you don't have enough good frames for any version median or averaging routines to get more than one or two good frames. My advice would be to run the FixRipsP routine and try to find as clean a frame as you can, then loop that frame for the number of frames you want in the video. In other words, the only clean version you'll get from this disaster is to remake the clip from a good frame. In this case one of the cleanest frames I found was frame #1, the second frame after running FixRipsP.

Ha, pretty brilliant. I have other dropouts like this where this is not the case, but I don't think I have good frames there either. Presumably I'm SOL for those?

Code:

LoadCPlugin("D:\Avisynth 2.5\plugins\yadif.dll")
When is yadif used here below? Did you mean to put in a call to it somewhere? You said you're using FixRipsP instead of A, so presumably a deinterlacer was intended. Is there any reason you opted for Yadif for this?

Code:

SeparateFields()
Why do the fields need to be separated here? So that ChromaShift works?

Code:

ChromaShift(C=-2)
You didn't shift it vertically this time - I guess I can't count on these shifts being consistent with a single video?

Code:

MergeChroma(aWarpSharp2(depth=30))
What do we gain over just running aWarpSharp2 directly? A more subtle, accurate sharpening?

Code:

Crop(12,0,0,-10)
This is a different crop from the first vid (didn't take off the top and right this time) even though the source was the same - I suppose I'd have to use the crop that works 'best' for the entire video...

Other than all of that: can you confirm what the terminology is to describe these things? Is the first example just 'flicker'? (I always thought that this referred to a sort of strobing light effect?) Is the second example a tracking error or a dropout? What causes the 'rips' that the attached scripts fix? Are they always called 'rips'?

Quote:

Originally Posted by sanlyn (Post 49727)
By the way, what's that garbage on the left-hand border of the samples ? ? ?

Yeah, I thought that looked weird - was hoping you could tell me :)

Thanks for your detailed response, much appreciated!

sanlyn 06-12-2017 07:31 PM

Quote:

Originally Posted by bilditup1 (Post 49730)
I'd like to respond but really have no clue how this works. Thanks, though.

The main routine in themaster1's script is similar to the FixRips routines, just not nearly as aggressive, and it keeps more detail.

Quote:

Originally Posted by bilditup1 (Post 49730)
I understand that modern TVs overscan by default but the video could easily be viewed on a computer, tablet, etc. It seems a waste to hardcode black pixels if it could be avoided. It also purportedly results in a less efficient encode, aside from the bits wasted on black pixels.

A border of black pixels takes up a fraction of the databits that the same area of image data would, because the black pixels are a single color value and they all look alike. Image data is far more complex than a black border. You can encode several seconds of pure black starter frames for a mere smidgen compared to several seconds of "real" image data. The "dropout" mp4 has 136 full color frames, but all of them are alike: look at the small encoded file size. I've seen JPG picture files bigger than that.

Videophiles and pros would preserve the original core image content as much as possible, for both archival and geometric reasons. Users differ. Adding black pixels is less trouble and angst than resizing, IMHO. But they're your videos.

Quote:

Originally Posted by bilditup1 (Post 49730)
Quote:

Originally Posted by sanlyn:
The flicker in your rainbow sample (it's not rainbows) is from gross oversaturation of the V channel, with mistracking making it worse. Red exceeds RGB 255, so it looks discolored and even worse when played back in RGB used by all playback devices.
Should I have tried to check and attempt to account for this during capture? I used VirtualDub's histogram to make sure the Y values were valid per your guide but I suppose there isn't an easy way to do this for color/it's easier to just use ColorYUV to fix later?

Yes, it's difficult to judge oversaturation with a luma-only histogram. It's also difficult to adjust the contrast of a single channel during capture using anything but shop-grade industrial software. Usually it would be addressed in post processing.
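
For a quick post-capture check in Avisynth (a sketch, with a placeholder file name), the vectorscope mode is handy:
Code:

AviSource("capture.avi")
ConvertToYV12(interlaced=true)
Histogram(mode="color2")  # vectorscope; chroma reaching past the graticule circle is oversaturated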

Quote:

Originally Posted by bilditup1 (Post 49730)
Did I miss something? Was there 'progressive material' in video production thirty years ago? Were there non-linear editors?

Yes to both. Ever heard of film? How many VHS tapes and TV shows were made from movie film (and still are), a progressive medium invented in the 1880s. People have been making interlaced slide shows from progressive images since the days of MS-DOS and early Macs. Both of the posted samples appear to be made from still shots. If you load the samples in VirtualDub and mount a deinterlacer, you'll see that the images display different distortions per field, but they basically don't move. If you were to make a slide show from photos, all of your frames would look progressive even if you encode as interlaced; the two fields in such interlaced frames will be the same image.

Quote:

Originally Posted by bilditup1 (Post 49730)
Do you recommend fast-forward/rewinding several times in order to get a good pack? I didn't know that that was the process. Do you think I should cap again?

You might get smoother tape flow and fewer disaster points. Considering the condition of the tapes, I'd say that just about anything might help, but you'll still get glitches. The fewer the better, and easier.

Quote:

Originally Posted by bilditup1 (Post 49730)
I also have an Elite Video BVP4+ that is not currently in the chain - I was under the impression (IIRC from something you said a while ago) that this was only to be used as a last resort so it usually stays off to the side. Do you think this video warrants its use? Using Color Tools per your tip below I found that G and B also hit 255.

I have a BVP-4, too, but you can't use it to lower bright contrast in a single color without affecting values in the other areas.

I don't know whose histogram you're using, but in the original avi, G and B peaked at about RGB 200 and red just barely hit RGB 255 at the end. That's not the same thing as red stampeding up the right-hand wall in the original.

Quote:

Originally Posted by bilditup1 (Post 49730)
Quote:

Originally Posted by sanlyn
You could probably load the rainbow clip results into VirtualDub to fix up the amount of red you want
So, how is that done, using either VDub or AVS? How is that different from simply undoing what ColorYUV() does in your script?

You can add red and/or adjust G and B in lower and middle ranges without affecting brights in RGB. The ColorYUV adjustment was mainly for overextended brights but also affects lower and middle red values. The ability to work three channels separately and in strictly targeted tonal ranges is an advantage with RGB.

When you view the output of an avisynth script in Virtualdub you can add VDub filters to the output at the same time. But remember that when doing so you should use ConvertToRGB32(interlaced=true or false) from YV12 video. I didn't use any VDub filters. The output from the scripts was saved with Lagarith as YV12.
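
In Avisynth terms the rough equivalent is something like this (a sketch only; a flat gain like RGBAdjust can't target tonal ranges the way the VirtualDub tools can):
Code:

ConvertToRGB32(matrix="rec601", interlaced=true)
RGBAdjust(r=1.05)  # nudge red up about 5% across the whole channel
ConvertToYV12(matrix="rec601", interlaced=true)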


Quote:

Originally Posted by bilditup1 (Post 49730)
Code:

ChromaShift(C=-2,L=-4)
How does one tell if the chroma is in the wrong place, and by how much?

The only way I know is by looking at images. Try the rainbow sample and note the bright halo under the bridge due to downward chroma shift. There's also horizontal shift, but it's more smeared than a strong shift -- shifting chroma sideways didn't help much, so I used a very minor leftward shift.
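
One trick that can make a shift easier to spot (my own habit, not a standard tool): exaggerate the chroma and blink-compare:
Code:

a = last
b = a.Tweak(sat=3.0, coring=false)  # pump saturation so color fringes stand out
Interleave(a, b)                    # step frame by frame in VirtualDub to compare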

Quote:

Originally Posted by bilditup1 (Post 49730)
Code:

SeparateFields().SelectEvery(4,0,3).Weave()
Let me see if I understood what this is doing. From QTGMC a 60 frames per second progressive clip emerged. Here, SeparateFields() splits each frame into two fields (120 fields/sec). SelectEvery() says that of every four of these fields, the 0th and the 3rd fields should be kept. Then Weave() reassembles our interlaced frames. Did I get it right? Is this the preferred way to reinterlace? Is using the superfast preset on QTGMC sufficient to convert to progressive for processing purposes?

SelectEvery() is pretty much the standard way, and Avisynth does it correctly. The faster presets in QTGMC do far less denoising and other work than the slower settings. Much faster, too.
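
Spelled out with comments, that line does this:
Code:

# input: 59.94fps progressive frames from QTGMC
SeparateFields()      # 119.88 fields per second
SelectEvery(4, 0, 3)  # keep the top field of frame N and the bottom field of frame N+1
Weave()               # pair them back into 29.97fps interlaced frames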

Quote:

Originally Posted by bilditup1 (Post 49730)
Quote:

Originally Posted by sanlyn
With the "dropout_no-audio.avi" sample, you don't have enough good frames for any version median or averaging routines to get more than one or two good frames. My advice would be to run the FixRipsP routine and try to find as clean a frame as you can, then loop that frame for the number of frames you want in the video. In other words, the only clean version you'll get from this disaster is to remake the clip from a good frame. In this case one of the cleanest frames I found was frame #1, the second frame after running FixRipsP.
Ha, pretty brilliant. I have other dropouts like this where this is not the case, but I don't think I have good frames there either. Presumably I'm SOL for those?

This worked because there's no motion (not counting all the distortion). If you have similar title cards in the same shape, you can always give the FixRips routine a try to see what you get. I didn't know until I tried it. With motion, you have to experiment and live with what you can get. We had a badly damaged video thread a few weeks ago where 60% or 70% was the best result.


Quote:

Originally Posted by bilditup1 (Post 49730)
When is yadif used here below? Did you mean to put in a call to it somewhere? You said you're using FixRipsP instead of A, so presumably a deinterlacer was intended. Is there any reason you opted for Yadif for this?

Code:

SeparateFields()
Why do the fields need to be separated here? So that ChromaShift works?

Good catch. That yadif line can be deleted. I tried it in an earlier version of the script and used comment marks to disable it, but forgot to remove it when I posted the script today. I just removed the "#" but kept the line. Silly me.

SeparateFields() works OK for horizontal chroma shift, not OK for up or down. You might also notice that the FixRipsP routine doesn't want interlaced fields. The FixRipsA version for interlaced video uses SeparateFields() internally.


Quote:

Originally Posted by bilditup1 (Post 49730)
Code:

ChromaShift(C=-2)
You didn't shift it horizontally this time - I guess I can't count on these shifts being consistent with a single video?

"C" values are horizontal shift instructions. "L" is for vertical shift. That code is a horizontal shift toward the left -- for all the good it did, which wasn't much. But sometimes every litttle bit helps.

Quote:

Originally Posted by bilditup1 (Post 49730)
Code:

MergeChroma(aWarpSharp2(depth=30))
What do we gain over just running aWarpSharp2 directly? A more subtle, accurate sharpening?

You can use other sharpeners that way if you want, but aWarpSharp2 tends to tighten color around edges. That's another way of saying that it often changes the shapes of things, subtly but visibly. That's where its name comes from. MergeChroma makes it work on chroma only. As a regular sharpener it often looks oversharpened and a little "etched" IMO.

Quote:

Originally Posted by bilditup1 (Post 49730)
Quote:

Originally Posted by sanlyn
By the way, what's that garbage on the left-hand border of the samples ? ? ?
Yeah, I thought that looked weird - was hoping you could tell me

Maybe someone knows. I don't have the slightest. That's one weird-tracking tape.

msgohan 06-12-2017 08:43 PM

Are the rainbows only in the first several seconds at the start of a recording? https://www.repairfaq.org/REPAIR/F_vcrfaq6.html

I believe the "garbage" on the left border is being added by the TBC.

bilditup1 06-12-2017 08:45 PM

Quote:

Originally Posted by sanlyn (Post 49732)
The main routine in themaster1's script is similar to the FixRips routines, just not nearly as aggressive, and it keeps more detail.

Got it, thanks.

Quote:

A border of black pixels takes up a fraction of the databits that the same area of image data would, because the black pixels are a single color value and they all look alike. Image data is far more complex than a black border. You can encode several seconds of pure black starter frames for a mere smidgen compared to several seconds of "real" image data. The "dropout" mp4 has 136 full color frames, but all of them are alike: look at the small encoded file size. I've seen JPG picture files bigger than that.
Right, I understand that black pixels themselves shouldn't take up much space. I was just under the impression that the hard transition from picture to black borders can affect encoding efficiency. Probably this impression is outdated or doesn't matter too much :/

Quote:

Videophiles and pros would preserve the original core image content as much as possible, for both archival and geometric reasons. Users differ. Adding black pixels is less trouble and angst than resizing, IMHO. But they're your videos.
OK, noted.

Quote:

Yes, it's difficult to judge oversaturation with a luma-only histogram. It's also difficult to adjust the contrast of a single channel during capture using anything but shop-grade industrial software. Usually it would be addressed in post processing.
Got it.

Quote:

Yes to both. Ever heard of film? How many VHS tapes and TV shows were made from movie film (and still are), a progressive medium invented in the 1880s. People have been making interlaced slide shows from progressive images since the days of MS-DOS and early Macs. Both of the posted samples appear to be made from still shots. If you load the samples in VirtualDub and mount a deinterlacer, you'll see that the images display different distortions per field, but they basically don't move. If you were to make a slide show from photos, all of your frames would look progressive even if you encode as interlaced; the two fields in such interlaced frames will be the same image.
Right, I meant though that video is inherently an interlaced format and this was shot on video; that's why I didn't understand the relevance of mentioning progressive content or an NLE. I'm also not used to the idea of describing, er, non-electrical? visual media like photos or film as either 'interlaced' or 'progressive', or why different processing or effects would need to be applied to a video taken of a photo than to a video taken of any other static, flat object (like a painting or something).

Quote:

You might get smoother tape flow and fewer disaster points. Considering the condition of the tapes, I'd say that just about anything might help, but you'll still get glitches. The fewer the better, and easier.
So that's a yes on a re-cap? OK.

Quote:

I have a BVP-4, too, but you can't use it to lower bright contrast in a single color without affecting values in the other areas.
Thanks, noted.

Quote:

I don't know whose histogram you're using, but in the original avi, G and B peaked at about RGB 200 and red just barely hit RGB 255 at the end. That's not the same thing as red stampeding up the right-hand wall in the original.
I meant in other points in the cap, not this small excerpt. I may try to post other portions of it but I'm not sure what we'd be comfortable with.

Quote:

You can add red and/or adjust G and B in lower and middle ranges without affecting brights in RGB. The ColorYUV adjustment was mainly for overextended brights but also affects lower and middle red values. The ability to work three channels separately and in strictly targeted tonal ranges is an advantage with RGB.

When you view the output of an avisynth script in Virtualdub you can add VDub filters to the output at the same time. But remember that when doing so you should use ConvertToRGB32(interlaced=true or false) from YV12 video. I didn't use any VDub filters. The output from the scripts was saved with Lagarith as YV12.
Noted. So, which filters, for either AVS or VDub, should be used for the purpose of playing with color saturation?

Quote:

The only way I know is by looking at images. Try the rainbow sample and note the bright halo under the bridge due to downward chroma shift. There's also horizontal shift, but it's more smeared than a strong shift -- shifting chroma sideways didn't help much, so I used a very minor leftward shift.
I'll look at that later tonight. Outside of haloing, is there any other telltale sign?
Ultimately you're doing trial and error and going through frame-by-frame to figure out the right values to shift by, right?

Quote:

SelectEvery() is pretty much the standard way, and Avisynth does it correctly. The faster presets in QTGMC do far less denoising and other work than the slower settings. Much faster, too.
Noted, thanks.

Quote:

This worked because there's no motion (not counting all the distortion). If you have similar title cards in the same shape, you can always give the FixRips routine a try to see what you get. I didn't know until I tried it. With motion, you have to experiment and live with what you can get. We had a badly damaged video thread a few weeks ago where 60% or 70% was the best result.
Let me make sure I understood - FixRips should be used in attempting to deal with rolling dropouts as well? 60% sounds a lot better than ya know, nothing :/

Quote:

Good catch. That yadif line can be deleted. I tried it in an earlier version of the script and used comment marks to disable it, but forgot to remove it when I posted the script today. I just removed the "#" but kept the line. Silly me.
Ah ok great.

Quote:


SeparateFields() works OK for horizontal chroma shift, not OK for up or down. You might also notice that the FixRipsP routine doesn't want interlaced fields. The FixRipsA version for interlaced video uses SeparateFields() internally.

So you used it mainly to feed FixRipsP what it expects, OK. Does it matter that it was put before the ChromaShift line?

Quote:

"C" values are horizontal shift instructions. "L" is for vertical shift. That code is a horizontal shift toward the left -- for all the good it did, which wasn't much. But sometimes every litttle bit helps.
Yup, this was a brainfart - I went back and corrected it right after posting, but apparently not soon enough heh.

Quote:

You can use other sharpeners that way if you want, but aWarpSharp2 tends to tighten color around edges. That's another way of saying that it often changes the shapes of things, subtly but visibly. That's where its name comes from. MergeChroma makes it work on chroma only. As a regular sharpener it often looks oversharpened and a little "etched" IMO.
Thanks for explaining this.

Quote:

Maybe someone knows. I don't have the slightest. That's one weird-tracking tape.
And there's plenty more where that came from, I'm afraid...

Quote:

Originally Posted by msgohan (Post 49734)
Are the rainbows only in the first several seconds at the start of a recording? https://www.repairfaq.org/REPAIR/F_vcrfaq6.html

No, they continue throughout this sort of 'photo montage' section, and sometimes return later on in the tape. This is not a taped-over thing, either, it was a commissioned video-taping of a family event.

Quote:

Originally Posted by msgohan (Post 49734)
I believe the "garbage" on the left border is being added by the TBC.

...yikes. Do I need to have it checked out? Should I try a cap without the TBC?

sanlyn 06-13-2017 11:02 AM

Uh-oh, lots of questions and another long one. Sorry for the delay. Answering your posts, in no particular order:

Quote:

Originally Posted by bilditup1 (Post 49735)
I meant though that video is inherently an interlaced format and this was shot on video; that's why I didn't understand the relevance of mentioning progressive content or an NLE. I'm also not used to the idea of describing, er, non-electrical? visual media like photos or film as either 'interlaced' or 'progressive', or why different processing or effects would need to be applied to a video taken of a photo than to a video taken of any other static, flat object (like a painting or something).

Well, things have changed with BluRay and HD, which have purely progressive formats for film sources as well as the older frame structures. Yesteryear and today as well, filmed source is progressive with pulldown applied, a format that has both progressive and interlaced frames. Many progressive videos are encoded with interlace flags. Some external boxes will play everything as interlaced.

There can be problems with applying transition effects like dissolves and fades to interlaced sources. If the effect is applied to interlaced video, the same fade or dissolve is applied to both fields at the same time. When the video is deinterlaced or inverse telecined, the same effect is now duplicated on multiple frames instead of just one, so it can look a little weird and seldom goes back together in the same way when reassembled.

Many people process images for slide shows in different ways, usually as progressive double-rate source, then interlace later. That's for picky people, unless they don't expect to reprocess the results later. Animation goes through all sorts of convolutions: created at 15fps progressive, then duplicate frames are added to get 25 or 30fps. Some are 23.976fps originals with pulldown applied for various playback formats. And you find all sorts of sloppy conversions that are impossible to repair.
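
For the record, the classic Avisynth idiom for applying 3:2 pulldown to 23.976fps progressive material (if memory serves, this is the pattern from the Avisynth docs; double-check field order against your source):
Code:

AssumeFPS(24000, 1001)
AssumeFrameBased()
SeparateFields()
SelectEvery(8, 0,1, 2,3, 2,5, 4,7, 6,7)  # repeat fields in the 3:2 cadence
Weave()  # 29.97fps, the usual mix of clean and combed frames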

Quote:

Originally Posted by bilditup1 (Post 49735)
Quote:

You might get smoother tape flow and fewer disaster points. Considering the condition of the tapes, I'd say that just about anything might help, but you'll still get glitches. The fewer the better, and easier.
So that's a yes on a re-cap? OK.

You can try capturing a few segments and see if there's a difference. If not, you haven't wasted time capturing the entire video.

Quote:

Originally Posted by bilditup1 (Post 49735)
Quote:

in the original avi, G and B peaked at about RGB 200 and red just barely hit RGB 255 at the end. That's not the same thing as red stampeding up the right-hand wall in the original.
I meant in other points in the cap, not this small excerpt. I may try to post other portions of it but I'm not sure what we'd be comfortable with.

VHS is terrible when it comes to levels and color balance, which can vary from shot to shot even with retail issues. Breaking the video into separate segments, processing each as required, then assembling them later, is common practice. A pain in the neck, to be sure. It sure doesn't make capture easy, where you have to set for worst-case scenarios and tweak later, or in some cases recapture a maverick segment.
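
A skeleton of the segment approach in Avisynth (frame numbers and corrections invented for illustration):
Code:

src  = AviSource("capture.avi")
seg1 = src.Trim(0, 4499).ColorYUV(gain_y=15)           # dark segment: lift luma
seg2 = src.Trim(4500, 0).Tweak(sat=0.9, coring=false)  # hot segment: trim chroma
seg1 ++ seg2  # aligned splice puts them back together, audio included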

Some videos are nightmares when it comes to this. Referring to re-capture, I have a long-term project of a godawful and aging early 1990s TV broadcast of a movie that's not in a decent print anywhere. I made 6 full captures and several partials using two different players. The originals are on an external drive. I'm still pulling segments from different captures today, with different filters and settings needed for each segment. This takes forever. Good thing most of my old tapes weren't that bad, but many were.

Quote:

Originally Posted by bilditup1 (Post 49735)
Quote:

The only way I know is by looking at images. Try the rainbow sample and note the bright halo under the bridge due to downward chroma shift. There's also horizontal shift, but it's more smeared than a strong shift -- shifting chroma sideways didn't help much, so I used a very minor leftward shift.
I'll look at that later tonight. Outside of haloing, is there any other telltale sign?
Ultimately you're doing trial and error and going through frame-by-frame to figure out the right values to shift by, right?

The downward shift looked the same on all frames, as it usually does. It's not technically a halo -- the colors below the bridge are shifted downward. Shifting up didn't cure the problem 100%. Chroma shifts are pretty obvious, but the chroma smear to the right is partly shift and partly just smear. You can see it in the main tower on the left. No easy fix for that.

BTW, I'll be crossing that bridge for real in about a month. :)

Quote:

Originally Posted by bilditup1 (Post 49735)
Quote:

This worked because there's no motion (not counting all the distortion). If you have similar title cards in the same shape, you can always give the FixRips routine a try to see what you get. I didn't know until I tried it. With motion, you have to experiment and live with what you can get. We had a badly damaged video thread a few weeks ago where 60% or 70% was the best result.
Let me make sure I understood - FixRips should be used in attempting to deal with rolling dropouts as well? 60% sounds a lot better than ya know, nothing :/

It depends on the video. I've seen it clean up some pretty rotten stuff, depending on the rolling frequency, the distance and number of good frames, etc. I saw another, much more complicated example on doom9 but I'd have to go back several years to find that post -- and it's not always an easy forum to search in many respects; they keep changing the links in their archives and Google searches can't find the new pages. Many doom9 links I saved years ago don't work any more. Drat!

Quote:

Originally Posted by bilditup1 (Post 49735)
Quote:

SeparateFields() works OK for horizontal chroma shift, not OK for up or down. You might also notice that the FixRipsP routine doesn't want interlaced fields. The FixRipsA version for interlaced video uses SeparateFields() internally.
So you used it mainly to feed FixRipsP what it expects, OK. Does it matter that it was put before the ChromaShift line?

In this case it didn't matter.

Quote:

Originally Posted by bilditup1 (Post 49735)
And there's plenty more where that came from, I'm afraid...

Join the club. I've had to use tricks like combining frames from multiple captures, which I mentioned earlier. I recently posted an example of combining available good frames from two captures of a nightmare tape. And, yes, I did try versions of FixRips and they didn't work so well, so I had to do manual workarounds. I even made an Excel spreadsheet of every frame for hundreds of frames, with notes on which frames could be used and which couldn't.

Below are links to previously posted short MP4 videos from two uncut lossless captures, with a link to the finished version.
Capture 1: VCR = Panasonic AG-1980, AVT-8710 tbc.
Capture 2: VCR = Panasonic PV-S4670, Panasonic ES10 line tbc pass-thru, AVT-8710 tbc.
Both capped with an ATI All In Wonder 9600XT with VirtualDub. The original tapes were recorded off bad cable TV signals at the slow, noisy EP 6-hour speed in 1991 on a very cheap 2-head RCA VCR. Lots of chroma noise, comets, fuzz, smeared chroma on edges, weird borders.

from capture 1:
http://www.digitalfaq.com/forum/atta...living10_b1mp4

from capture 2, the very end deleted (all bad frames):
http://www.digitalfaq.com/forum/atta...iving11_b1amp4

final version, denoised, color corrected. Still a few blips in the starting frames.
http://www.digitalfaq.com/forum/atta...p_mva-finalmp4

The Avisynth script to combine the frames is in this post:
http://www.digitalfaq.com/forum/vide...html#post47174
There are also lots of nightmare problem examples in that thread and some hot debate. You can learn a ton by browsing old threads, as most of us do. And the final has borders. I plan to rework some of this. Plug in the coffee maker.


Quote:

Originally Posted by bilditup1 (Post 49736)
Quote:

Originally Posted by msgohan (Post 49734)
I believe the "garbage" on the left border is being added by the TBC.

...yikes. Do I need to have it checked out? Should I try a cap without the TBC?

Framing problems will be worse without the TBC. The borders will have to be cropped and evened-up anyway, regardless.

Quote:

Originally Posted by bilditup1 (Post 49735)
So, which filters, for either AVS or VDub, should be used for the purpose of playing with color saturation?

Would take a long time to answer that one. For VirtualDub some popular filters are ColorMill (similar to color wheels in expensive apps, but uses sliders), gradation curves (similar to the curves filters in Photoshop, Premiere Pro, After Effects), and Donald Graft's Hue/Saturation/Intensity. You're probably familiar with analysis tools like ColorTools, or handy desktop pixel value readers like Csamp.

An old posted example from my previous life for using Csamp to read pixel values and curves to adjust color is here: http://www.digitalfaq.com/forum/vide...html#post38384
Tutorials for using a curves filter are at the download site (http://members.chello.at/nagiller/vdub/index.html) and in many free internet tutorials for Photoshop (example: http://www.cambridgeincolour.com/tut...hop-curves.htm).

A thread with many examples of using histograms and other tools, in which you'll find a post about ColorMill and curves, and several other posts in the same thread: http://www.digitalfaq.com/forum/vide...html#post42315.

Donald Graft's HSI filter is pretty much self-explanatory by looking at it. Be careful, as very slight adjustments can often have strong effects.

Avisynth has several color and level adjustment filters: ColorYUV, Levels, Tweak, RGBAdjust. You'll spend a week fooling with them. They're more difficult to use, but they're essential. Working with VirtualDub in RGB is great for tweaking color problems that you find in YUV, which should usually be corrected first before moving to RGB. Avisynth's Histogram function has many options for analysis in YUV.
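
A bare-bones YUV-first pass might look like this (the settings are examples, not a recipe):
Code:

AviSource("capture.avi")
ConvertToYV12(interlaced=true)
Levels(16, 1.0, 255, 16, 235, coring=false)  # pull overshoots toward legal range
Tweak(sat=0.95, coring=false)                # mild overall chroma trim
Histogram(mode="levels")                     # verify before moving on to RGB work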

koberulz 06-13-2017 11:18 PM

What changes have you made from lordsmurf's filter for FixRips?

sanlyn 06-14-2017 10:26 AM

2 Attachment(s)
I changed the opening lines of the script to accept a clip that is either interlaced or non-interlaced. The "A" or interlaced version accepts an interlaced clip and uses SeparateFields() and Weave() internally to work with interlaced material. The "P" or progressive version accepts a non-interlaced clip which has been either deinterlaced with a deinterlacer (QTGMC, yadif, etc.) or field-separated by using SeparateFields() before calling the .avs "P" function. Thus, the "P" version simply eliminates SeparateFields() and Weave() from the code of the original version.

You have the original, which was posted by lordsmurf as "studio1b.avs" in this post: http://www.digitalfaq.com/forum/vide...html#post45915. Earlier in this current thread I posted an "A" mod for interlaced video in post #2, and a "P" mod for non-interlaced video in the same post. But I found some typos in both scripts, so I'm attaching two revisions below. The revisions work a little better, the old ones might have problems with some videos. The new versions are "FixRipsA2.avs" and FixRipsP2.avs, attached.

You can call FixRipsP2 and send it a progressive video in this manner:
Code:

Import("path/to/Avisynth/plugins/FixRipsP2.avs")
AviSource("my_video.avi") #or whatever source
AssumeTFF() #or BFF
ConvertToYV12(interlaced = true or false)
QTGMC(whatever settings) #or yadif, or whatever
FixRipsP2()
...more non-interlaced processing if necessary...
...more non-interlaced processing if necessary...
SeparateFields().SelectEvery(4,0,3).Weave()  #<- reinterlace if necessary

You can call FixRipsP2 and send it a field-separated video in this manner:
Code:

Import("path/to/Avisynth/plugins/FixRipsP2.avs")
AviSource("my_video.avi") #or whatever source
AssumeTFF() #or BFF
ConvertToYV12(interlaced = true or false)
SeparateFields()
FixRipsP2()
...more non-interlaced processing if necessary...
...more non-interlaced processing if necessary...
Weave()

You can call FixRipsA2 and send it an interlaced video in this manner:
Code:

Import("path/to/Avisynth/plugins/FixRipsA2.avs")
AviSource("my_video.avi") #or whatever source
AssumeTFF() #or BFF
ConvertToYV12(interlaced = true)
FixRipsA2()

The A2 and P2 versions allow you to perform other operations on a video apart from having to paste all of the original code into your script.

To use lordsmurf's original posted script you must be working with non-progressive video and copy the entire original script as part of your code, either as a complete script in itself or placed in a suitably logical part of your code. The original versions from lordsmurf and other sources have an opening section along with three internal functions (MinBlur, Median2, and TMedian2), which must be included somewhere in your script. In FixRipsA2.avs and FixRipsP2.avs the three internal functions are an integral part of the .avs scripts, so they don't have to be copied again in your code.

If you browse the A2 and P2 versions you'll see that the first several opening lines are different in each version.

msgohan 06-15-2017 09:27 AM

Quote:

Originally Posted by sanlyn (Post 49748)
Framing problems will be worse without the TBC. The borders will have to be cropped and evened-up anyway, regardless.

Yeah, I wouldn't bother re-capturing without the TBC unless you want to satisfy your curiosity as to whether it's really the cause. Though it's possible there is an internal adjustment like the BVP that would remove the artifact. That edge would probably just be black without the artifact anyway.

cedric75018 02-22-2018 08:52 AM

Hello! Thank you for those wonderful scripts! I've tested P2 on some footage and it did magic, but sometimes on fast-moving objects we can see some artefacts (like motion artefacts). Do you know why?

I've also noticed that the denoising also destroys some picture detail. For example, birds that are only a few pixels can disappear.

As the script seems very complicated, do you know if it is possible to adjust some parameters to reduce the artefacts on fast-moving objects? Thank you for your help!

sanlyn 02-23-2018 01:10 AM

Quote:

Originally Posted by cedric75018 (Post 52988)
Hello! Thank you for those wonderful scripts! I've tested P2 on some footage and it did magic, but sometimes on fast-moving objects we can see some artefacts (like motion artefacts). Do you know why?

Because the filter creates new interpolated frames using information available in the current frames and by averaging (and guessing) where new objects belong. Essentially new frames are created from old ones by methods which are themselves prone to errors and omissions, in the same way that video and audio encoding omits data.

Short version: You can't have everything.

I've often had to make two versions of really godawful video, one processed with an industrial-strength pile driver of a cleaner and another with a less destructive filter that's easier on less-damaged areas, or even some frame by frame spot patching over several frames or frame groups. Then I'd take the best of each version and combine them using something like RemapFrames (http://avisynth.nl/index.php?title=R...es&redirect=no). Does this take a long time and some nail-biting? Yes. Sometimes you take your losses or live with the originals. And some things just can't be fixed.
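
With the RemapFrames package loaded, the combining step itself is short (clip names and frame ranges hypothetical):
Code:

light  = AviSource("version_light.avi")   # gently filtered pass
strong = AviSource("version_strong.avi")  # pile-driver pass
# take the strong version only where the light one still shows damage:
ReplaceFramesSimple(light, strong, mappings="[120 180] [300 333]")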

cedric75018 02-23-2018 04:59 AM

Hello sanlyn, I totally agree with you about replacing some frames with frames from a "less filtered version". The thing is that the script is soooo complicated to me that I don't understand how to create a simple/light/non-destructive version for parts where the interpolation filter creates bad frames :(.

sanlyn 02-23-2018 07:49 AM

Usually one would require 3 separate scripts, rather than go bananas trying to do everything in one swoop. Script #1 processes and saves version 1, script #2 processes and saves version 2. Then script #3 opens version 1 and version 2 and combines sections or even individual frames from each, and creates and saves version 3.

Some time ago I had different captures from the same damaged tape. Each time the bad tape played, a different video head would pick up a different clean interlaced field among all the damaged frames and fields. About 8 seconds of video required multiple scripts and intermediate files. The video was a telecined movie, but inverse telecine removed some good frames and fields and kept some bad ones. So I had to keep all the fields, good and bad, because some fields were good in one capture but bad in the other. Then I had to clean/filter each capture and create an Excel spreadsheet that listed the images from each file in two columns. Each Excel column showed an X for a bad image and a number for good ones. Add to that, the filtered captures had a different number of frames because some frames in one capture or another would be dropped or duplicated. Then I wrote a script to select the good fields from each capture and combine them into a third result that included all the frames for the entire movie sequence, both good and bad. Only the best of two sets of images were selected and overlaid onto the third incoming file, to create a new output file of only-good images. The last step was to open a capture that contained all the original audio for the full 8-plus seconds and overlay that sound track onto the finished file.

