I've been working on this project for a while: a method of restoring video captured from VHS not by means of post-production (which introduces artifacts), but by extracting all the information possible from the source, capturing the same tape multiple times.

There are a few posts in this forum and elsewhere where this possibility is mentioned, but I haven't found any of the proposed solutions satisfactory.

Many speak about "averaging" captures, but this is inherently wrong: if an error is present in a single capture, averaging spreads that error into the result instead of keeping the good parts.

Others, on the other hand, correctly point out that the operation to use is a median, not an average.
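To see the difference concretely, here is a tiny Python illustration (the pixel values are made up):

```python
import statistics

# Hypothetical luma values of the same pixel in three captures of the same
# frame; the third capture suffered a dropout (200 instead of ~50).
captures = [50, 52, 200]

mean = sum(captures) / len(captures)  # ~100.7: the error contaminates the result
med = statistics.median(captures)     # 52: the erroneous reading is discarded
```

The median keeps a value that was genuinely read from the tape, while the mean is pulled halfway toward the dropout.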

In principle, this kind of restoration is one of the best possible, because its result is not "created" by any assumption like "I think there is noise here, let's try to remove it!" or "I think the image is blurred... let's sharpen it a bit!". Its result is simply all the information that can be read from the tape, net of accidental reading errors and random fluctuations.

You should not fool yourself into believing that this method will let you read ALL the information the tape contains just by making more and more captures.

What this method allows you to do, in the limit, is extract ALL the information that YOUR equipment (VCR, cables, capture card) can extract from a tape.

That means this method is not worth using with a bad VCR, because the result will be worse than a single capture made with a good VCR.

This method finds its reason to exist when it's used to push to the limit the possibilities of a VCR that is already VERY GOOD.

I have a JVC HR-S9600EU and I find it perfect for this mission when put in EDIT MODE. I also have a good Panasonic NV-HS1000, but it sharpens the image by its intrinsic nature, so I don't think it's the right candidate for this job.

Ok, let's say you have a VERY GOOD VCR.

AviSynth already has an external filter that takes the median, frame by frame and pixel by pixel, of multiple captures to improve quality... good! So it's easy, right? Capture multiple times, cut the start and the end of the VHS with VirtualDub, and we have the clips to feed to the median function.

No, not really.

The problem with that, the MAIN PROBLEM, is that the clips must be perfectly aligned, and in 99.99% of cases this will not happen, due to randomly dropped and duplicated frames.

Even a single frame of misalignment will make the result terrible.

It doesn't matter that VirtualDub says there are 0 dropped frames, or 0 dupes. There WILL be dropped frames! Random drops can be introduced here and there by the capture card, by a passthrough device like the ES10, by a hard drive that isn't fast enough, and so on, and VirtualDub cannot know that. Even if you tell VirtualDub to "not drop frames nor insert null frames", there will be dropped frames.

It's simple to prove that there are indeed dropped frames even when your capture statistics say "0 dropped": capture twice and compare the frame counts of the two captures (after cutting away the parts before and after the actual footage).

Obviously you also have to set the timing options to avoid dropping frames and inserting duplicates. These are my settings:

If you see the same number of frames in the two captures you are very lucky!

If you don't see the same number (99% of cases), the clips are certainly not aligned, so they can't be used in a median.

But even if you see the same number, the clips could still be misaligned, because a dropped frame may be compensated by a duplicate in one capture but not in another (another 0.99% of cases).

Perhaps only in 0.01% of cases are two clips natively aligned.

Having a very good SSD, closing all software (antivirus, etc.) except VirtualDub, and not touching the computer during the captures greatly increases the probability of not having dropped frames.

But a working median cannot be built from just two clips. You have to use at least three (or a larger odd number), and all the clips have to be mutually aligned.
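A quick way to convince yourself that two clips are not enough (Python, made-up values): the median of an even number of samples is defined as the average of the two middle values, so with only two captures any error still leaks into the result.

```python
import statistics

# With two captures, the "median" is just their average: an error leaks through.
two = statistics.median([50, 200])        # 125.0
# With three captures, the erroneous sample is outvoted by the two good ones.
three = statistics.median([50, 52, 200])  # 52
```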

This problem has never really been tackled, as far as I know.

Yes, here and there someone mentions the possibility of aligning the captures "by hand" in an editing program.

This solution is not practical at all. It can take days to perfectly align one hour across three captures by hand, with a lot of frustration, because it's a tedious activity. In the early days I tried it this way and gave up. It's too much even with great motivation.

A solution to align the clips automatically with AviSynth was needed to explore the possibilities of restoration by median.

I wrote a solution that has its limitations but does its job very well, and I'm so satisfied with the result that I wanted to share it with you.

The limitations are:

- The captures may have different frame counts, but the misalignment between the same frame in two captures must be at most a small number (3 frames in the code I share; you can modify the code to search in more distant frames too, but I find 3 frames sufficient most of the time, if the captures are done properly).

- The alignment does not work well on the first 10-15 frames of a tape, where the VCR is still finding the best tracking and there are random global luma variations (I tried disabling the B.E.S.T. function of the JVC, but it doesn't give a better result).

- The algorithm is slow, but not too slow (the "placebo" setting of QTGMC is slower!).

But the main limitation is that

THE CODE CANNOT BE WRITTEN AS A SIMPLE USER-DEFINED FUNCTION like "alignClips(clip A1, clip A2, -parameters-)".

This is because it makes extensive use of AviSynth's runtime environment, which cannot work within the scope of user-defined functions. So if you want to extend this code, for example to align 5 clips instead of three, you'll have to do it yourself.

Let me first show you the possibilities of this method.

It can be used to try to recover severely damaged footage (in the first row of the video I took the already-aligned clips by mistake, so there you see the power of the median, not of the alignment process):

http://www.digitalfaq.com/forum/atta...1&d=1584407980
You can get a clue of how much information is missed in a normal single capture, even with a very good VCR, by looking at the following samples:

I also tried doing 7 captures, but the improvement over 5 captures is always barely noticeable, even when zooming into the pixels, so I concluded that in most cases, when you want to extract more information from a tape, 3 captures are enough, provided the videos are perfectly aligned.

This is the code:

Code:

#YOU HAVE TO SUBSTITUTE JVC1.AVI, JVC2.AVI, JVC3.AVI WITH YOUR FILENAMES
#THE CLIPS HAVE TO BE THE SAME LENGTH +-3 (Maximum difference of frame count between two clips can be 7 frames, but try always to have less than 3-4 frames of frame-count distance or it may not work).
A1 = avisource("JVC1.avi").AssumeTFF().separateFields().pointresize(720,576).convertToYv12()
A2 = avisource("JVC2.avi").AssumeTFF().separateFields().pointresize(720,576).convertToYv12()
A3 = avisource("JVC3.avi").AssumeTFF().separateFields().pointresize(720,576).convertToYv12()
#THE VIDEO NEEDS TO BE PROCESSED WITH SEPARATED FIELDS BECAUSE THERE IS NO GUARANTEE THAT TWO CAPTURES CONTAIN THE SAME "FRAMES". THE "FRAMES" ARE NOT GENERATED BY THE VHS STREAM; THEY ARE CONSTRUCTED BY THE CAPTURE CARD, AND SOMETIMES TWO CAPTURES HAVE THE SAME FIELDS IN DIFFERENT FRAMES.
#YV12 IS NEEDED FOR THE CONDITIONALFILTER. TO NOT LOSE CHROMA INFORMATION I FIRST DOUBLE THE HEIGHT OF THE VIDEO
function edgescropped(clip A){return A.crop(50,50,620,490).grayscale()}
function myTrim(clip A, int plus){ #TO ALIGN THE VIDEO MAINTAINING THE SAME LENGTH AS THE CLIP
B = A
B = (plus < 1) ? trim(A, -plus, 0) : B
B = (plus == 1) ? trim(A, 0, -1)+A : B
B = (plus > 1) ? trim(A, 0, plus-1)+A : B
return B
}
#LOGIC:
#THE CLIPS A2 AND A3 ARE ALIGNED WITH THE CLIP A1.
#FOR EACH FIELD IT IS DETERMINED WHICH TOP FIELD IS MOST SIMILAR TO THE CORRESPONDING TOP FIELD IN A1, SEARCHING WITHIN +-3 TOP FIELDS OF DISTANCE. THE SAME IS DONE WITH THE BOTTOM FIELDS.
A2Z1A = ScriptClip(A2, "ColorYUV(gain_y = (AverageLuma(A1)/AverageLuma(A2))*256-256)")
A2Z1B = ScriptClip(myTrim(A2,2), "ColorYUV(gain_y = (AverageLuma(A1)/AverageLuma(myTrim(A2,2)))*256-256)")
A2Z1 = conditionalFilter(A2, A2, myTrim(A2,2), "lumadifference(edgescropped(A2Z1A), edgescropped(A1))+lumadifference(edgescropped(myTrim(A2Z1A,1)), edgescropped(myTrim(A1,1)))", "<", "lumadifference(edgescropped(A2Z1B), edgescropped(A1))+lumadifference(edgescropped(myTrim(A2Z1B,1)), edgescropped(myTrim(A1,1)))")
A2Z2A = ScriptClip(A2Z1, "ColorYUV(gain_y = (AverageLuma(A1)/AverageLuma(A2Z1))*256-256)")
A2Z2B = ScriptClip(myTrim(A2,-2), "ColorYUV(gain_y = (AverageLuma(A1)/AverageLuma(myTrim(A2,-2)))*256-256)")
A2Z2 = conditionalFilter(A2Z1, A2Z1, myTrim(A2,-2), "lumadifference(edgescropped(A2Z2A), edgescropped(A1))+lumadifference(edgescropped(myTrim(A2Z2A,1)), edgescropped(myTrim(A1,1)))", "<", "lumadifference(edgescropped(A2Z2B), edgescropped(A1))+lumadifference(edgescropped(myTrim(A2Z2B,1)), edgescropped(myTrim(A1,1)))")
A2Z3A = ScriptClip(A2Z2, "ColorYUV(gain_y = (AverageLuma(A1)/AverageLuma(A2Z2))*256-256)")
A2Z3B = ScriptClip(myTrim(A2,4), "ColorYUV(gain_y = (AverageLuma(A1)/AverageLuma(myTrim(A2,4)))*256-256)")
A2Z3 = conditionalFilter(A2Z2, A2Z2, myTrim(A2,4), "lumadifference(edgescropped(A2Z3A), edgescropped(A1))+lumadifference(edgescropped(myTrim(A2Z3A,1)), edgescropped(myTrim(A1,1)))", "<", "lumadifference(edgescropped(A2Z3B), edgescropped(A1))+lumadifference(edgescropped(myTrim(A2Z3B,1)), edgescropped(myTrim(A1,1)))")
A2Z4A = ScriptClip(A2Z3, "ColorYUV(gain_y = (AverageLuma(A1)/AverageLuma(A2Z3))*256-256)")
A2Z4B = ScriptClip(myTrim(A2,-4), "ColorYUV(gain_y = (AverageLuma(A1)/AverageLuma(myTrim(A2,-4)))*256-256)")
A2Z4 = conditionalFilter(A2Z3, A2Z3, myTrim(A2,-4), "lumadifference(edgescropped(A2Z4A), edgescropped(A1))+lumadifference(edgescropped(myTrim(A2Z4A,1)), edgescropped(myTrim(A1,1)))", "<", "lumadifference(edgescropped(A2Z4B), edgescropped(A1))+lumadifference(edgescropped(myTrim(A2Z4B,1)), edgescropped(myTrim(A1,1)))")
A2Z5A = ScriptClip(A2Z4, "ColorYUV(gain_y = (AverageLuma(A1)/AverageLuma(A2Z4))*256-256)")
A2Z5B = ScriptClip(myTrim(A2,6), "ColorYUV(gain_y = (AverageLuma(A1)/AverageLuma(myTrim(A2,6)))*256-256)")
A2Z5 = conditionalFilter(A2Z4, A2Z4, myTrim(A2,6), "lumadifference(edgescropped(A2Z5A), edgescropped(A1))+lumadifference(edgescropped(myTrim(A2Z5A,1)), edgescropped(myTrim(A1,1)))", "<", "lumadifference(edgescropped(A2Z5B), edgescropped(A1))+lumadifference(edgescropped(myTrim(A2Z5B,1)), edgescropped(myTrim(A1,1)))")
A2Z6A = ScriptClip(A2Z5, "ColorYUV(gain_y = (AverageLuma(A1)/AverageLuma(A2Z5))*256-256)")
A2Z6B = ScriptClip(myTrim(A2,-6), "ColorYUV(gain_y = (AverageLuma(A1)/AverageLuma(myTrim(A2,-6)))*256-256)")
A2Z6 = conditionalFilter(A2Z5, A2Z5, myTrim(A2,-6), "lumadifference(edgescropped(A2Z6A), edgescropped(A1))+lumadifference(edgescropped(myTrim(A2Z6A,1)), edgescropped(myTrim(A1,1)))", "<", "lumadifference(edgescropped(A2Z6B), edgescropped(A1))+lumadifference(edgescropped(myTrim(A2Z6B,1)), edgescropped(myTrim(A1,1)))")
A2_Aligned = A2Z6
A3Z1A = ScriptClip(A3, "ColorYUV(gain_y = (AverageLuma(A1)/AverageLuma(A3))*256-256)")
A3Z1B = ScriptClip(myTrim(A3,2), "ColorYUV(gain_y = (AverageLuma(A1)/AverageLuma(myTrim(A3,2)))*256-256)")
A3Z1 = conditionalFilter(A3, A3, myTrim(A3,2), "lumadifference(edgescropped(A3Z1A), edgescropped(A1))+lumadifference(edgescropped(myTrim(A3Z1A,1)), edgescropped(myTrim(A1,1)))", "<", "lumadifference(edgescropped(A3Z1B), edgescropped(A1))+lumadifference(edgescropped(myTrim(A3Z1B,1)), edgescropped(myTrim(A1,1)))")
A3Z2A = ScriptClip(A3Z1, "ColorYUV(gain_y = (AverageLuma(A1)/AverageLuma(A3Z1))*256-256)")
A3Z2B = ScriptClip(myTrim(A3,-2), "ColorYUV(gain_y = (AverageLuma(A1)/AverageLuma(myTrim(A3,-2)))*256-256)")
A3Z2 = conditionalFilter(A3Z1, A3Z1, myTrim(A3,-2), "lumadifference(edgescropped(A3Z2A), edgescropped(A1))+lumadifference(edgescropped(myTrim(A3Z2A,1)), edgescropped(myTrim(A1,1)))", "<", "lumadifference(edgescropped(A3Z2B), edgescropped(A1))+lumadifference(edgescropped(myTrim(A3Z2B,1)), edgescropped(myTrim(A1,1)))")
A3Z3A = ScriptClip(A3Z2, "ColorYUV(gain_y = (AverageLuma(A1)/AverageLuma(A3Z2))*256-256)")
A3Z3B = ScriptClip(myTrim(A3,4), "ColorYUV(gain_y = (AverageLuma(A1)/AverageLuma(myTrim(A3,4)))*256-256)")
A3Z3 = conditionalFilter(A3Z2, A3Z2, myTrim(A3,4), "lumadifference(edgescropped(A3Z3A), edgescropped(A1))+lumadifference(edgescropped(myTrim(A3Z3A,1)), edgescropped(myTrim(A1,1)))", "<", "lumadifference(edgescropped(A3Z3B), edgescropped(A1))+lumadifference(edgescropped(myTrim(A3Z3B,1)), edgescropped(myTrim(A1,1)))")
A3Z4A = ScriptClip(A3Z3, "ColorYUV(gain_y = (AverageLuma(A1)/AverageLuma(A3Z3))*256-256)")
A3Z4B = ScriptClip(myTrim(A3,-4), "ColorYUV(gain_y = (AverageLuma(A1)/AverageLuma(myTrim(A3,-4)))*256-256)")
A3Z4 = conditionalFilter(A3Z3, A3Z3, myTrim(A3,-4), "lumadifference(edgescropped(A3Z4A), edgescropped(A1))+lumadifference(edgescropped(myTrim(A3Z4A,1)), edgescropped(myTrim(A1,1)))", "<", "lumadifference(edgescropped(A3Z4B), edgescropped(A1))+lumadifference(edgescropped(myTrim(A3Z4B,1)), edgescropped(myTrim(A1,1)))")
A3Z5A = ScriptClip(A3Z4, "ColorYUV(gain_y = (AverageLuma(A1)/AverageLuma(A3Z4))*256-256)")
A3Z5B = ScriptClip(myTrim(A3,6), "ColorYUV(gain_y = (AverageLuma(A1)/AverageLuma(myTrim(A3,6)))*256-256)")
A3Z5 = conditionalFilter(A3Z4, A3Z4, myTrim(A3,6), "lumadifference(edgescropped(A3Z5A), edgescropped(A1))+lumadifference(edgescropped(myTrim(A3Z5A,1)), edgescropped(myTrim(A1,1)))", "<", "lumadifference(edgescropped(A3Z5B), edgescropped(A1))+lumadifference(edgescropped(myTrim(A3Z5B,1)), edgescropped(myTrim(A1,1)))")
A3Z6A = ScriptClip(A3Z5, "ColorYUV(gain_y = (AverageLuma(A1)/AverageLuma(A3Z5))*256-256)")
A3Z6B = ScriptClip(myTrim(A3,-6), "ColorYUV(gain_y = (AverageLuma(A1)/AverageLuma(myTrim(A3,-6)))*256-256)")
A3Z6 = conditionalFilter(A3Z5, A3Z5, myTrim(A3,-6), "lumadifference(edgescropped(A3Z6A), edgescropped(A1))+lumadifference(edgescropped(myTrim(A3Z6A,1)), edgescropped(myTrim(A1,1)))", "<", "lumadifference(edgescropped(A3Z6B), edgescropped(A1))+lumadifference(edgescropped(myTrim(A3Z6B,1)), edgescropped(myTrim(A1,1)))")
A3_Aligned = A3Z6
medianA = median(A1, A2_Aligned, A3_Aligned)
#YOU CAN STOP AT medianA IF YOU WANT. THE NEXT PART IS NEEDED TO FIX SOME LUMA ARTIFACTS THAT COME FROM USING THE MEDIAN ON THE FIRST 10-15 FRAMES, WHERE THERE ARE ABNORMAL LUMA FLUCTUATIONS
B1 = ScriptClip(A1, "ColorYUV(gain_y = (AverageLuma(medianA)/AverageLuma(A1))*256-256)")
B2 = ScriptClip(A2_Aligned, "ColorYUV(gain_y = (AverageLuma(medianA)/AverageLuma(A2_Aligned))*256-256)")
B3 = ScriptClip(A3_Aligned, "ColorYUV(gain_y = (AverageLuma(medianA)/AverageLuma(A3_Aligned))*256-256)")
medianB = median(B1, B2, B3)
medianB.converttoYuy2().pointresize(720,288).weave()
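To make the logic easier to follow, here is the same idea restated as a Python sketch. This is only an illustration of the principle, not a translation of the script: all the function names are invented, it collapses the greedy chain of conditionalFilter calls into a single search, and fields are modelled as flat lists of luma samples. The normalize_luma step mirrors ColorYUV(gain_y = (target/current)*256-256), since gain_y scales luma by (gain_y+256)/256.

```python
def normalize_luma(field, target_avg):
    """Scale a field so its average luma matches target_avg
    (the role played by ColorYUV(gain_y=...) in the script)."""
    avg = sum(field) / len(field)
    return [v * target_avg / avg for v in field]

def luma_difference(a, b):
    """Sum of absolute luma differences (stand-in for LumaDifference)."""
    return sum(abs(x - y) for x, y in zip(a, b))

def align(reference, capture, max_offset=3):
    """For each field of `reference`, keep the field of `capture` located
    within +/- max_offset positions that matches it best after luma
    normalization (the role played by the conditionalFilter chain)."""
    aligned = []
    for i, ref_field in enumerate(reference):
        ref_avg = sum(ref_field) / len(ref_field)
        best, best_score = None, None
        for off in range(-max_offset, max_offset + 1):
            j = i + off
            if 0 <= j < len(capture):
                cand = normalize_luma(capture[j], ref_avg)
                score = luma_difference(cand, ref_field)
                if best_score is None or score < best_score:
                    best_score, best = score, capture[j]
        aligned.append(best)
    return aligned

# A capture with one spurious extra field at the start realigns correctly:
ref = [[10, 20], [40, 10], [30, 30]]
cap = [[8, 12], [10, 20], [40, 10], [30, 30]]
aligned = align(ref, cap)  # recovers [[10, 20], [40, 10], [30, 30]]
```

The real script works the same way per field, except that it keeps the comparison inside AviSynth's runtime (ScriptClip/ConditionalFilter) and compares pairs of fields at each step instead of single ones.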

I restored several VHS tapes with this method (many hours of footage) and I never saw any bad artifact, except in some spots where the tape was damaged or where systematic reading errors occurred (e.g. at the beginning of the tape).

I hope this code will be helpful to some "purist" who wants the best quality at the source for his most important footage, and that it can be improved by others.

Cheers!

Boulayo / Benzio