
digitalFAQ.com Forum (https://www.digitalfaq.com/forum/)
-   Restore, Filter, Improve Quality (https://www.digitalfaq.com/forum/video-restore/)
-   -   Three different kind of chroma noises? (https://www.digitalfaq.com/forum/video-restore/10446-three-different-kind.html)

benzio 03-21-2020 05:22 AM

Three different kinds of chroma noise?
 
1 Attachment(s)
Hi!

I'm trying to fix these three kinds of chroma noise (see attachment).
In the video, the top row shows the U channel and the bottom row the V channel; the first column is the even fields, the second column the odd fields.

I'm quite confident that there isn't a "one size fits all" filter, but maaaaybe..!

1) Are these kinds of chroma noise "known" noises?
2) Is there a good spatio-temporal, maybe also motion-compensated, chroma noise removal filter?
3) These are the result of a median of multiple captures, so I'm confident that the noise is actually on the tape and was not introduced by the capture process. But is there a way to remove this kind of noise from the source? Is the comb filter useful in this case?

Thanks!

scharfis_brain 03-21-2020 08:46 AM

just ignore the chroma issues and throw QTGMC() at it:
Code:

qtgmc(Blocksize=16, ChromaMotion=false, RepChroma=false,tr2=3)
That will magically solve any chroma noise issues and produce a nice progressive full-rate video without noise.

Additionally, you might want to improve the chroma sharpness by pulling the chroma transitions towards the luma transitions (Chroma Transient Improvement = CTI):
Code:

awarpsharp2(chroma=4, thresh=120,depth=0,depthc=32)

lordsmurf 03-21-2020 09:18 AM

Quote:

Originally Posted by scharfis_brain (Post 67403)
just ignore the chroma issues and throw QTGMC() at it:
That will magically solve any chroma noise issues.

Some? Maybe.
All? No.

Quote:

Originally Posted by benzio (Post 67398)
I'm trying to fix these three kinds of chroma noise (see attachment)

A full-color sample clip is required, not just a clip of 1 channel.

scharfis_brain 03-21-2020 12:27 PM

What I see from his sample is:

- chroma shimmering due to the composite signal input when the tape was recorded
-> this is fixed by QTGMC

- chroma noise caused by the color-under modulation of VHS
-> this is also fixed by QTGMC
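
If needed, QTGMC's own noise handling can be enabled on top of that; a rough example (parameter values are only a starting point, not a recipe):

Code:

qtgmc(Preset="Slower", EZDenoise=2.0, ChromaNoise=true, DenoiseMC=true)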

benzio 03-22-2020 06:29 PM

6 Attachment(s)
Well... I tried QTGMC(), but honestly I don't like it, for two reasons:

1) I don't want to deinterlace. OK, maybe it can be used for purposes beyond deinterlacing, but all the filters inside it are there to give a better deinterlaced output.
2) I prefer not to use filters that do things "magically". I want to know what's going on as much as possible, and QTGMC is really complex and uses many filters that are difficult to fully understand.




I've posted the clips of the three samples and my restoration attempts, which gave me a lot of satisfaction.

In the first clip I see 6 main problems:
1) The colors... very unnatural... -> I've adjusted them a bit with curves, paying attention not to lose detail.
2) The general chroma noise -> Just a little bit of CCD in VirtualDub was enough.
3) A strange chroma bleed that is not really a chroma bleed: it's not actually shifted horizontally. I've tried to adjust it with a chroma shift, but then it bleeds on the other side. The chroma is very blurred, not shifted; that's why it bleeds in every direction. I don't know what causes this problem (that interests me) or how to fix it.
4) A huge wave, flowing horizontally, in both chroma channels. Why do you say it is caused by composite video? I used S-Video for capture and I captured many times, so I'm quite confident that the problem is on the tape!
I managed to fix that in AviSynth this way:
Code:

A = avisource("RESTAURATO.avi")

# Blend U with the inverted V: what both channels share (the wave) stands out, then rescale it
UMASK = overlay(utoy(A), vtoy(A).invert(), mode="blend", opacity=0.5).levels(128,1,255,0,128)
# Subtract the wave from the U channel (UCORRETTO = corrected U)
UCORRETTO = overlay(utoy(A).converttorgb32, UMASK.converttorgb32, mode="subtract", pc_range=true).converttoyuy2

# Same idea for V, but the subtraction is masked so only the strongly affected areas are touched
VMASK = overlay(utoy(A).invert(), vtoy(A), mode="blend", opacity=0.5).levels(128,1,255,0,128)
VCORRETTO = overlay(vtoy(A).converttorgb32, VMASK.converttorgb32, mode="subtract", pc_range=true, mask=VMASK.levels(4,1,8,0,255).invert).converttoyuy2

# Recombine the corrected chroma planes with the original luma
FINAL = YtoUV(UCORRETTO, VCORRETTO, A)

FINAL

The idea is that the wave is present in both chroma channels, with inverted peaks (why? what caused it?). So I emphasize what is present equally in both channels by blending them (U and the inverse of V).
Then I correct U by subtracting that wave, and I do the same for the V channel, but with a mask (because in the V channel there are more relevant areas affected by that subtraction).

5) Some dropouts that are present on the tape (not the fault of my VCR). -> I've left them there for the moment.
6) A little "jump" of the image. -> Maybe caused by the tape transport or by my ES10? These are not so rare in my captures. They occurred very often when I had both the JVC TBC and the ES10 in the chain, but that was a wrong practice I abandoned; they occur more rarely since I kept only the ES10. I could stabilize them easily if I wanted to... I don't care for the moment.



In the second clip there are 5 evident problems:
1) Colors -> Solved with curves.
2) Huge chroma noise due to the low light -> Solved with a bit of CCD and a bit of CNR in VirtualDub.
3) The shot was made with a long exposure, so the hands of the director are "flowing" with a trail of light. I don't think it's possible to do anything about that. Am I right?
4) The same blurred chroma bleed as in the first clip.
5) A very strange wavy pattern in the chroma channels that affects the lights. The crazy chroma totally inverts itself every field. -> I solved that the same way I solved the wave in the first clip, but I ask myself what may have caused it.



In the third clip there are three problems I notice:
1) General chroma noise -> Solved with a bit of CCD and a bit of CNR in VirtualDub.
2) A layer of dirt on the lens. I'm very proud of my AviSynth script to solve that! I isolated the layer, keeping its pattern in place every time a dark area passed under it, and once I had the full pattern I slightly subtracted it from the clip. This is the code I used:
Code:

A1 = avisource("RESTAURATONECHROMA.avi").convertToRGB32
A2 = A1.grayscale
A3 = A2.fastblur(3)
A4 = uu_mt_blend(A2, A3, mode="subtract") # only the points above the local average remain
A5 = overlay(A4, A4.fastblur(3), mode="subtract", pc_range=true)
# keep the dirt pattern only where a dark area is passing under it
A6 = overlay(A5, A5.BlankClip(), mode="darken", mask=A2.levels(12,1,32,0,255).fastblur(3).levels(0,1,60,0,255))
A6 = overlay(A6, A5.fastblur(3), mode="subtract", pc_range=true)
A7 = A6.levels(0,1,32,0,255).temporalAverage(75, "forward") # temporalAverage is one of my own functions; it averages the next n frames of the clip
RESULT = Overlay(A1, A7, opacity=0.7, mode="subtract", mask=A2.levels(16,3,255,0,255).invert, pc_range=true).convertToYuy2
RESULT

I think that the result is excellent.

3) The last problem is that part of the chroma noise in the dark areas spread into the luma channel (why? how is that possible?), causing huge noise there that I could not remove with noise removal tools without losing detail. I first tried to remove it by slightly subtracting the noise taken from the chroma channels from the luma, but for some reason that approach didn't work, so I removed it by making a temporal average of 2 frames only in the very dark areas (under luma level 3), roughly as sketched below. I lost detail, even if it's almost not perceivable at all (I'm obsessed with not losing any "real" information, so I wasn't happy about that), but the job was done.
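
A minimal sketch of that masked two-frame average, in the same Overlay/Levels style as above (the filename and the exact thresholds are only placeholders):

Code:

# hypothetical input: the clip after the chroma cleanup
A = AviSource("RESTAURATONECHROMA.avi")

# the next frame, padded with a repeat of the last frame so the lengths match
NEXTF = A.Trim(1, 0) + A.Trim(A.FrameCount() - 1, -1)

# 50/50 average of the current frame and the next one
AVG = Merge(A, NEXTF, 0.5)

# mask that is white only where the luma is very dark (roughly below level 3)
DARKMASK = A.ConvertToRGB32().Grayscale().Levels(0, 1, 3, 255, 0)

# apply the averaged frames only inside the dark areas
Overlay(A, AVG, mask=DARKMASK, mode="blend", opacity=1.0)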


Thanks to Lord Smurf and everyone who makes this forum a place to learn a lot of things!

themaster1 03-23-2020 03:07 AM

Where can I get this temporalAverage filter? I can't find it for some reason. I want to try your script out of curiosity.

benzio 03-23-2020 05:17 AM

Quote:

Originally Posted by themaster1 (Post 67448)
Where can I get this temporalAverage filter? I can't find it for some reason. I want to try your script out of curiosity.

You can't find it because I wrote it and never published it. I use it very often in my scripts, so I've put it in the AviSynth plugins folder with an .avsi extension.
Here is the code... Very simple: it averages a lot of consecutive frames. A big N requires a lot of RAM.
It brings out the zones where "things" stay fixed for a long time.
It uses this filter: http://avisynth.nl/index.php/Average

Code:

# Averages each frame with the following (or preceding) n frames of the clip, all weighted equally.
# mode = "forward" uses the following frames, "backward" the preceding ones.
function temporalAverage(clip source, int frames, string mode){
    return tAverage(source, source, frames, 0, mode)
}

# Recursive helper: builds the average incrementally, blending the running result ("new")
# with a time-shifted copy of the source, weighted 1/(current+1) at each step (uses the Average plugin).
function tAverage(clip old, clip new, int frames, int current, string mode){
    averageforward = average(new, 1.0-(float(1)/(current+1)), old.trim(current,0)+old.trim(0,current+1), float(1)/(current+1))
    averagebackward = average(new, 1.0-(float(1)/(current+1)), old.trim(0,current+1)+old, float(1)/(current+1))
    returnvalue = (mode == "backward") ? averagebackward : old
    returnvalue = (mode == "forward") ? averageforward : returnvalue
    return (current == frames+1) ? returnvalue : tAverage(old, returnvalue, frames, current+1, mode)
}
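
For reference, a usage example (the filename is made up); with the function saved as an .avsi in the plugins folder it can be called like any other filter:

Code:

# average each frame with the 75 frames that follow it
AviSource("capture.avi").temporalAverage(75, "forward")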


scharfis_brain 03-23-2020 12:37 PM

The problem with the filters you're using is that they are just motion adaptive, not motion compensated.
Thus you're wasting an enormous potential gain in image quality.
I would stay away from temporal filtering in VirtualDub. Most of it can be done more effectively in AviSynth with masktools and mvtools.

Also, working in the interlaced domain whilst preserving the field structure needs special care, is sometimes impossible to do properly, and is often a PITA.
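
To make that concrete, here is a minimal sketch of a motion-compensated temporal denoise done per field stream with MVTools2, so the field structure is preserved (the filename, thSAD and plane values are only illustrative starting points):

Code:

AviSource("Samples problems_1.avi")
AssumeTFF()
ConvertToYV16(interlaced=true)
fields = SeparateFields()

# motion-compensated temporal denoise of one field stream
function MCDenoiseChroma(clip c) {
    sup = c.MSuper(pel=2)
    bv = MAnalyse(sup, isb=true, delta=1)
    fv = MAnalyse(sup, isb=false, delta=1)
    return c.MDegrain1(sup, bv, fv, thSAD=300, plane=3)  # plane=3 = chroma planes only
}

ev = MCDenoiseChroma(fields.SelectEven())
od = MCDenoiseChroma(fields.SelectOdd())

Interleave(ev, od)
AssumeFieldBased().AssumeTFF().Weave()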

QTGMC is my swiss army knife here, because it introduces a heck of a lot of stability to the video, depending on how you set its parameters. Have a look at the script. It explains stuff quite well.

Even though QTGMC has progressive output, you can still obtain an interlaced video afterwards by throwing away half of the lines:

Code:

assumetff() # or assumebff(), depending on your source's field order
QTGMC()
separatefields().selectevery(4,0,3).weave()

But I wouldn't do that. Interlacing is bad.
Nearly all realtime deinterlacers in TVs suck.
I'd go straight to 720p50 / 720p60 or beyond.
This compresses better anyway.
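
For a PAL capture that route could look roughly like this (the target size assumes a 4:3 source brought to square pixels; treat the numbers as an example, not a rule):

Code:

AssumeTFF()
QTGMC(Preset="Slower")      # 576i (50 fields/s) -> 576p50, progressive double rate
Spline36Resize(960, 720)    # 4:3 square-pixel 720p50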

benzio 03-23-2020 01:09 PM

Quote:

Originally Posted by scharfis_brain (Post 67457)
The problem with the filters you're using is that they are just motion adaptive, not motion compensated. [...] Even though QTGMC has progressive output, you can still obtain an interlaced video afterwards by throwing away half of the lines. [...] But I wouldn't do that. Interlacing is bad. [...] I'd go straight to 720p50 / 720p60 or beyond.



I've read that there is a way to force QTGMC to be lossless (using lossless=true) such that if you do QTGMC().separatefields().selectevery(4,0,3).weave() you get back the EXACT video you started with... I've tried it, and for now I'm unable to make it work properly.
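
If I'm reading the QTGMC documentation correctly, Lossless is an integer parameter (1 or 2) rather than a bool, so the round trip would look something like this (untested sketch, filename made up):

Code:

AviSource("capture.avi")
AssumeTFF()
QTGMC(Preset="Slower", Lossless=2)
# re-interlace; with Lossless set this should give back (nearly) the original fields
SeparateFields().SelectEvery(4, 0, 3).Weave()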

I understand the potential of motion compensation and maybe I'll use it, once I've taken the time to study QTGMC and can use it with awareness.

I care about quality, but I'm not happy if I obtain quality "magically"! And I care about preserving the source as much as possible, even in the output format.

I don't want to save the video as 720p50 (actually I do, but not for archiving! I do that when I have to upload a video online or give a compressed copy to someone), forgetting the fact that the source was interlaced...
By the same argument I could send the video to Topaz Video Gigapixel and let its AI transform my VHS capture into 4K! The results are really astonishing! But I don't want to do it, out of respect for the source.

I'm a hobbyist: I get satisfaction not from the result in the first instance, but from the process :-)

msgohan 03-23-2020 01:51 PM

Quote:

Originally Posted by benzio (Post 67444)
3) A strange chroma bleed that is not really a chroma bleed: it's not actually shifted horizontally. I've tried to adjust it with a chroma shift, but then it bleeds on the other side. The chroma is very blurred, not shifted; that's why it bleeds in every direction. I don't know what causes this problem (that interests me) or how to fix it.

This is just how VHS, S-VHS, and all other consumer analog tape formats work: chroma is allotted a much lower bandwidth than luma. https://forum.videohelp.com/threads/...=1#post1937426

Scharfis_Brain's second suggestion was aimed at improving this problem.

benzio 03-23-2020 03:30 PM

Quote:

Originally Posted by msgohan (Post 67459)
This is just how VHS, S-VHS, and all other consumer analog tape formats work: chroma is allotted a much lower bandwidth than luma. https://forum.videohelp.com/threads/...=1#post1937426


I know, but a bleed of 2-4 pixels is one thing... A bleed in every direction 8-10 pixels wide is another!
That cannot be explained just by chroma subsampling!

themaster1 03-23-2020 03:30 PM

@ benzio
Is it normal that your script treats the source as progressive even though it's interlaced, at least for Samples problems_1.avi (I haven't checked the others)?
It should be AVISource("Samples problems_1.avi").assumetff().convertToRGB32(interlaced=true)

I still prefer my script, but I'm the picky kind:

Code:

AVISource("Samples problems_1.avi")
assumetff()
ConvertToYV16(interlaced=true)
orig=last
ev=orig.assumetff().separatefields().selecteven()
od=orig.assumetff().separatefields().selectodd()
ev
ue_chroma = UToY(ev).ttempsmooth(maxr=1,lthresh=60, strength=1)
ve_chroma = VToY(ev).ttempsmooth(maxr=1,lthresh=60, strength=1)
YToUV(ue_chroma, ve_chroma)
MergeLuma(ev)
ev_filtered=last
od
uo_chroma = UToY(od).ttempsmooth(maxr=1,lthresh=60, strength=1)
vo_chroma = VToY(od).ttempsmooth(maxr=1,lthresh=60, strength=1)
YToUV(uo_chroma, vo_chroma)
MergeLuma(od)
od_filtered=last
interleave(ev_filtered,od_filtered)
assumefieldbased().assumetff().weave()
ConverttoRGB32(matrix="rec601",interlaced=true)
separatefields()
LoadVirtualDubPlugin("C:\Program Files (x86)\virtualdubmod1.5\plugins\Camcorder_Color_Denoise_sse2.vdf", "CCD", 0)
CCD(2,1)
weave()
converttoyv12(matrix="Rec601",interlaced=true)

msgohan 03-23-2020 07:39 PM

Quote:

Originally Posted by benzio (Post 67460)
I know, but a bleed of 2-4 pixels is one thing... A bleed in every direction 8-10 pixels wide is another!
That cannot be explained just by chroma subsampling!

Jagabo measured a chroma resolution of roughly 40-50 transitions across the 720-pixel width. That works out to a transition spanning about 14-18 pixels, i.e. a smear of about 7-9 pixels in each horizontal direction from any given luma sample:

720 / 50 ≈ 14
720 / 40 = 18

(Strictly speaking, chroma subsampling is digital terminology.)

