  #1  
02-11-2019, 01:43 PM
willow5 willow5 is offline
Free Member
 
Join Date: Jun 2016
Posts: 137
Thanked 0 Times in 0 Posts
Hi all,

Well, it has been quite a few years since I posted in this forum. In all this time I have made some excellent progress and transferred lots of tapes onto my hard drive, but given that VDub captures about 30 GB an hour on average, I am now running out of space and need to do something with the files to make room for more captures.

I have so far captured in this configuration:

1) 8mm/Hi8 Sony EV-C400E (no line TBC) -> TBC1000 -> AIW 9600
2) VHS Panasonic NV-FS200 (with TBC on) -> TBC1000 -> AIW9600

I must stress that even with this setup, there were many inserted frames but rarely any dropped frames. How do I find where these inserted frames are, given that most tapes run between 90 and 120 minutes? And what were the reasons for the inserted frames - was I doing something wrong?

Also, in terms of VDub settings, I used the very handy guide on this forum to make adjustments and only really changed the contrast to 112 (from 128), whilst leaving all other levels at 128, to make the histogram OK at the top end. After that it was very much a case of batch capture and checking the histogram levels. Does this sound about right or should I have done anything differently before capturing en masse?

So, now I need to do some restoration / filtering / general quality improvement. I must stress that as we moved between 8mm and Hi8 home cameras, the quality varies. Please can someone (Sanlyn / Lordsmurf ?) advise on the first steps I need to take before I can enter this stage?

I can post a few clips if it helps to understand the quality....

Thanks

Last edited by willow5; 02-11-2019 at 01:57 PM. Reason: added more information
  #2  
02-11-2019, 05:13 PM
lordsmurf's Avatar
lordsmurf lordsmurf is online now
Site Staff | Video
 
Join Date: Dec 2002
Posts: 13,503
Thanked 2,448 Times in 2,080 Posts
Post some clips.

  #3  
02-11-2019, 06:47 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Quote:
Originally Posted by willow5 View Post
So, now I need to do some restoration / filtering / general quality improvement. I must stress that as we moved between 8mm and Hi8 home cameras, the quality varies. Please can someone (Sanlyn / Lordsmurf ?) advise on the first steps I need to take before I can enter this stage?

I can post a few clips if it helps to understand the quality....
Yes, one can evaluate only from sample clips that are unfiltered. Samples of 8 to 10 seconds should suffice, and they should include motion or a segment showing a particular problem you want addressed. To prevent altering colorspace or other factors in your sample, open an avi in Virtualdub and edit down to a short segment. Then click "Video..." -> "direct stream copy" before saving the file.
http://www.digitalfaq.com/forum/news...ly-upload.html

There is only one way to find dropped/inserted frames: by looking for them. Avisynth does have a filter or two that could help.
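
If you want to automate the looking, one rough approach is to log suspiciously similar frames with Avisynth's runtime functions and then inspect those spots by hand. This is only a sketch -- the file names are placeholders, the 0.5 threshold will need tuning per tape, and WriteFileIf needs a reasonably recent Avisynth 2.6.x build:
Code:
# sketch: log likely duplicate frames to a text file
AviSource("D:\captures\tape01.avi")
# write the frame number whenever the luma difference from the previous frame is tiny
WriteFileIf(last, "D:\captures\tape01_dupes.txt", "YDifferenceFromPrevious() < 0.5", "current_frame")
Run the script through to the end (for example with VirtualDub's File -> Run video analysis pass) so every frame gets evaluated; exact duplicates show a difference of almost 0, while ordinary static scenes usually sit a little higher.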
  #4  
02-12-2019, 03:53 PM
willow5 willow5 is offline
Free Member
 
Join Date: Jun 2016
Posts: 137
Thanked 0 Times in 0 Posts
Thanks all - good to be back... my last thread, "Please Review My Capture Setup", generated a lot of Q&A, so I am hoping that this one will too....

Anyways back to this query - I have 6 different scenarios that I need to address initially focussing only on 8mm tape transfers (VHS restoration will be addressed later). The scenarios are as follows:

1) 8mm camera recording onto 8mm tape in good lighting conditions (i.e. daylight) with movement
2) 8mm camera recording onto 8mm tape in poor lighting conditions (i.e. night time) with movement
3) Hi8 camera recording onto 8mm tape in good lighting conditions (i.e. daylight) with movement
4) Hi8 camera recording onto 8mm tape in poor lighting conditions (i.e. night time) with movement
5) Hi8 camera recording onto Hi8 tape in good lighting conditions (i.e. daylight) with movement
6) Hi8 camera recording onto Hi8 tape in poor lighting conditions (i.e. night time) with movement

In this post I will address scenario 1 and in future posts I will address scenarios 2-6. Hoping you can help with this scenario initially. Is this clip adequate or should I post another in a different setting ? Could someone kindly clean this video up for me and show me what the possibilities are ?

@Sanlyn and @Lordsmurf good to be back, how do I search for inserted frames ? What am I looking for over say a 90 to 120 minute period ?

Also is there any concept of upscaling a poor 8mm or VHS tape to 720p or 1080i or is this beyond the realms of possibility ? Finally, should I be using a Hi8 player with a line TBC function to dub my tapes ?


Attached Files
File Type: avi test clip.avi (84.21 MB, 67 downloads)
  #5  
02-16-2019, 01:36 AM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Thanks for the sample.

Quote:
Originally Posted by willow5 View Post
how do I search for inserted frames ? What am I looking for over say a 90 to 120 minute period ?
How many inserted frames? With interlaced video it often works out to inserted fields rather than frames. In any case, an inserted image is a duplicate. How many? In my own experience over the years, I've had zero inserted frames in the main body of my captures. Rarely I've had one or two dupes in leader frames as the capture first started, but I don't start captures on important frames. You get an idea of where to look for dupes by observing the statistics update in the right-hand column of the VirtualDub capture screen. Unless you have some serious problems, you will likely seldom if ever see a duplicate during play. If you do spot them, it's during post-processing.

[EDIT] If you're feeling adventurous you can try an Avisynth plugin solution, which was just updated today. The filter itself can be found in the Mediafire links discussed in this thread: https://forum.doom9.org/showthread.php?t=176111
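
For what it's worth, the usual do-it-yourself repair once duplicates are confirmed is the community "filldrops" idea: replace a frame that barely differs from its predecessor with a motion-interpolated one built from its neighbours. The sketch below assumes mvtools2.dll is in your plugins folder; the 0.1 threshold is a guess that needs adjusting per capture, and it expects progressive frames (so apply it per field or after deinterlacing on interlaced material):
Code:
# sketch of a filldrops-style dupe replacer (requires MVTools2)
function FillDups(clip c) {
  sup  = MSuper(c, pel=2)
  bvec = MAnalyse(sup, isb=true,  truemotion=true)
  fvec = MAnalyse(sup, isb=false, truemotion=true)
  interp = MFlowInter(c, sup, bvec, fvec, time=50)
  # swap in the interpolated frame only where the original barely changed
  return ConditionalFilter(c, interp, c, "YDifferenceFromPrevious()", "lessthan", "0.1")
}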

Quote:
Originally Posted by willow5 View Post
Also is there any concept of upscaling a poor 8mm or VHS tape to 720p or 1080i or is this beyond the realms of possibility ?
Throughout the video processing world, once you remove all of the clueless newbies from the discussion, the overwhelming consensus about upscaling low-resolution standard definition sources is that it is an utter, complete, and absolute waste of time. Leave upscaling to your players, which can do it a lot better than you ever could with software. High definition is based on high resolution sources, not on low-rez fuzzies blown up into big blurry frame sizes.
Quote:
Originally Posted by willow5 View Post
Finally, should I be using a Hi8 player with a line TBC function to dub my tapes ?
Only with Hi8 tapes.

The first sample script is a standard way of taking an initial look at a scene. Original borders are cropped away to avoid affecting the histogram.
Code:
Crop(16,2,0,-10)
ConvertToYV12(interlaced=true)
Histogram("Levels")
The YUV histogram in the image below shows good control of YUV input levels. It also shows that overall brightness is changed by the camera's AGC -- brightness dims as the shot begins and is reduced to final levels with a quick "snap" into final level at frame 129. The color bands show a slight deficit in the U channel and a slight bias toward the V channel (the result is a yellow color cast).
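
(The correction that follows from that reading is a small offset nudge. The values below are the ones used in the full script later in this post; the histogram line is just left in so the bands can be re-checked after the change.)
Code:
ColorYUV(off_u=8, off_v=-3)   # raise U slightly, pull V down slightly, to counter the yellow cast
Histogram("Levels")           # re-check the color bands after the nudge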



I would forget about the left-border cyan damage. It will never repair satisfactorily. This and other discolorations result from tape aging and improper storage. Best to discard some of the bad pixels and re-center the image with a clean border. There are also about 4 pixels of yellow noise on the right border. Other scenes without the same damage won't have borders that exactly match those in this segment. Most scenes in other segments will have dirty borders of one kind or other and SMPTE 4:3 frames usually have most image content in only 704 of 720 pixels. The changeover of almost-similar borders during playback will be so fast and subtle that no one will notice. This sort of compromise is done all the time, especially with archival newsfilm.

Another variation: In the beginning of the shot, the large central octagonal hub has fairly bright shadow detail. As the camera zooms back, by the end of the shot the hub is darkened, with far less visible detail, and the color balance of the sky area changes several times. These are reminders of the way consumer auto "features" act less like conveniences and more like defects. Because the lens zooms back and includes more of the dark interior than in the beginning, AGC causes the brightness of the sky and its details to change several times. There is no such thing as an "anti-AGC" filter to correct this, so you simply have to live with the results.

Consumers appear to be unaware of how jittery camera motion impairs and limits the action of denoisers and other filters. Frantic motion creates interlace and motion artifacts, as well as showing how much extra bitrate is required (and wasted) by such motion in final encodes.

The script below uses an optional left-border cleanup routine with the chubbyrain2.avs and smoothUV.dll anti-rainbow filters. It's optional because another shot with different background colors under the cyan stain might be adversely affected. The GradFun2DBmod gradient smoother prevents hard edges and block noise in smooth areas in the final encode, necessary because of the fairly strong denoising required.

Color, saturation and levels tweaking were applied in VirtualDub using CamcorderColorDenoise, ColorMill, gradation curves, and VDub's graphical Levels filter. In particular, the sliding Levels control was used to restore some brilliance to the overhead sky, while the curves filter was used to limit super-brights to luminance-safe specular highlights in that area. Filter settings used were saved in a VirtualDub .vcf file as TestClip1_settings.vcf (attached).

Aliasing and line twitter along diagonals during motion are common problems with shutter operation in consumer cameras. This can be calmed to some extent with QTGMC and the vInverse filter. If it continues to be annoying, QTGMC can be modified to discard the alternate frame that is interpolated during deinterlace; effectively this removes 50% of the noise, as well as cutting temporal resolution in half. The result is 25fps progressive video. However, such segments can be encoded as interlaced (the encoder will embed interlace flags), so those progressive segments can then be merged with interlaced sections in the same final video.

Using QTGMC to produce progressive video can be done using the FPSDivisor parameter. In the script posted later below, the "normal" QTGMC statement is:

Code:
QTGMC(preset="medium",EZDenoise=6,denoiser="dfttest",ChromaMotion=true,\
ChromaNoise=true,DenoiseMC=true,GrainRestore=0.3,border=true)

The same statement can be modified with the FPSDivisor parameter to make 25fps progressive video:

Code:
QTGMC(preset="medium",EZDenoise=6,denoiser="dfttest",ChromaMotion=true,\
   ChromaNoise=true,DenoiseMC=true,GrainRestore=0.3,border=true,\
   FPSDivisor=2)
Below is the script I used to get the filtered 25fps interlaced result (attached as the encoded "TestClip1_25i.mp4"). Please note: jittery motion is seen by many temporal filters as seriously noisy. More noise = stronger settings = more cleanup work = slower filtering. Therefore this is a very slow running script, processing at about 3 fps.
Code:
Import("D:\Avisynth 2.5\plugins\chubbyrain2.avs")
Import("D:\Avisynth 2.5\plugins\RemoveDirtMC.avs")

AviSource("D:\forum\faq\willow5\D\test clip1.avi")
ColorYUV(off_u=8,off_v=-3)
ConvertToYV12(interlaced=true)
AssumeTFF()

### --- optional chubbyrain2 left-border routine --- ###
SeparateFields()
a=last
 
a
chubbyrain2()
smoothuv(radius=7)
crop(0,0,-688,0,true)
ColorYuv(off_v=4)  #<- add some red to the new patch
b=last
 
overlay(a,b)
weave()
### --- end of optional chubbyrain2 left-border routine --- ###

QTGMC(preset="medium",EZDenoise=6,denoiser="dfttest",ChromaMotion=true,\
   ChromaNoise=true,DenoiseMC=true,GrainRestore=0.3,border=true)
vinverse2()
BiFrost(interlaced=false)
DeHalo_Alpha(rx=2.5)
RemoveDirtMC(40,false)
GradFun2DBmod(thr=1.8)
LSFmod()
AddGrainC(1.5,1.5)
Crop(16,2,-4,-12).AddBorders(10,6,10,8)
SeparateFields().SelectEvery(4,0,3).Weave()
### --- To RGB32 for VirtualDub filters --- ###
ConvertToRGB32(interlaced=true,matrix="Rec601")
return last
Other than the chubbyrain2 routine and the VirtualDub filters mentioned earlier, the denoisers and chroma cleaners used were dfttest, RemoveDirtMC, and Bifrost. GradFun2DBmod is a gradient smoother, DeHalo_Alpha cleans edge halos, and LSFmod is a sharpener.

The progressive version is attached as "TestClip1_25p.mp4". Although it is physically progressive, it's encoded with interlace flags. Some external players would play it as interlaced anyway. It doesn't have as much diagonal line twitter as the 25i version, but motion isn't as smooth.

I don't know where the loud hiss and noise are coming from in your sample, but it's badly over modulated. The audio was captured at 96KHz, which is a very low sampling rate. Usually it would be 48KHz for capture.

-- merged --

Sorry, let me correct myself:

Quote:
Originally Posted by sanlyn View Post
I don't know where the loud hiss and noise are coming from in your sample, but it's badly over modulated. The audio was captured at 96KHz, which is a very low sampling rate. Usually it would be 48KHz for capture.
I should have posted:

I don't know where the loud hiss and noise are coming from in your sample, but it's badly over modulated. The audio was captured at 96KHz. Usually it would be 48KHz for capture.


Attached Images
File Type: jpg frame 129 initial lookover cropped.jpg (77.5 KB, 363 downloads)
Attached Files
File Type: vcf TestClip1_settings.vcf (3.8 KB, 20 downloads)
File Type: mp4 testclip1_25i.mp4 (6.18 MB, 29 downloads)
File Type: mp4 testclip1_25p.mp4 (6.09 MB, 40 downloads)
The following users thank sanlyn for this useful post: captainvic (02-19-2019), lordsmurf (02-16-2019)
  #6  
02-16-2019, 08:15 AM
dpalomaki dpalomaki is offline
Free Member
 
Join Date: Feb 2014
Location: VA
Posts: 1,694
Thanked 369 Times in 325 Posts
Quote:
...
Originally Posted by willow5 View Post
Finally, should I be using a Hi8 player with a line TBC function to dub my tapes ?...

Response Posted by sanlyn
Only with Hi8 tapes.

Would standard 8mm players have s-video output like the Hi8 players have? The Sony EV-A50 for example only has composite output.
  #7  
02-16-2019, 02:27 PM
willow5 willow5 is offline
Free Member
 
Join Date: Jun 2016
Posts: 137
Thanked 0 Times in 0 Posts
@Sanlyn,

Wow, thank you so much for the comprehensive answer. I don't know where to start replying to this, so excuse my reply if it seems rambling as I have so many follow-up questions.

Quote:
How many inserted frames? With interlaced video it often works out to inserted fields rather than frames. In any case, an inserted image is a duplicate. How many? In my own experience over the years, I've had zero inserted frames in the main body of my captures. Rarely I've had one or two dupes in leader frames as the capture first started, but I don't start captures on important frames. You get an idea of where to look for dupes by observing the statistics update in the right-hand column of the VirtualDub capture screen. Unless you have some serious problems, you will likely seldom if ever see a duplicate during play. If you do spot them, it's during post-processing.
I am not sure; I think around 30 frames over a 90-minute period, which seems a lot to me. The question I have is how can duplicated frames be inserted without dropping any frames? Presumably something needs to drop to make space for a duplicated frame, otherwise you end up with more frames than you started with?

Quote:
Throughout the video processing world, once you remove all of the clueless newbies from the discussion, the overwhelming consensus about upscaling low-resolution standard definition sources is that it is an utter, complete, and absolute waste of time. Leave upscaling to your players, which can do it a lot better than you ever could with software. High definition is based on high resolution sources, not on low-rez fuzzies blown up into big blurry frame sizes.
thanks for the answer on this, I suspected as much but wanted to get it confirmed by an expert

Quote:
Finally, should I be using a Hi8 player with a line TBC function to dub my tapes ?
Only with Hi8 tapes.
Can I ask why you say only with Hi8 tapes ? What happens if you use a Hi8 player with in built TBC with a non Hi8 tape ?

Now we come onto the actual edit and your marvelous work on my clip....I have so many questions here that I honestly do not know where to start. Please assume that I am a total novice when it comes to scripts and so on so I do not really know what to do with the scripts you have provided. Is there a comprehensive list of all available scripts and uses/outputs so I can determine which ones to use in future or is it a case of asking on a case by case basis ?

Looking at this still from frame 129, what are you looking for in these colour histograms ? Also how do you produce such colour histograms ? Is it on a frame by frame basis or could you have this constantly running while the video is playing ? What does "Good" look like ?

Quote:
I would forget about the left-border cyan damage. It will never repair satisfactorily. This and other discolorations result from tape aging and improper storage. Best to discard some of the bad pixels and re-center the image with a clean border.
When you say left-border cyan damage, do you mean the "green" line that runs from the top to bottom ? If so, this line appears on all recordings from this particular (non-Hi8) camera. I believe it was a budget camera so assumed it was a camera specific artefact. Is this not the case ?

Quote:
There are also about 4 pixels of yellow noise on the right border. Other scenes without the same damage won't have borders that exactly match those in this segment. Most scenes in other segments will have dirty borders of one kind or other and SMPTE 4:3 frames usually have most image content in only 704 of 720 pixels. The changeover of almost-similar borders during playback will be so fast and subtle that no one will notice. This sort of compromise is done all the time, especially with archival newsfilm.
To the untrained eye, I am not sure what you are referring to here. Is there a still you can post to show these 4 pixels ? What is meant by "Other scenes without the same damage won't have borders that exactly match those in this segment" ? Does it mean that the tape is damaged or that the border varies according to scene/frame ? Is this a variable parameter ? What is the optimum setting therefore from a cropping point of view ? Presumably the crop cannot change on a frame by frame basis, I guess you need to choose a setting and stick with it throughout the capture ? In my case, is this the optimum crop setting that you posted earlier:

Quote:
Crop(16,2,0,-10)
Quote:
Another variation: In the beginning of the shot, the large central octagonal hub has fairly bright shadow detail. As the camera zooms back, by the end of the shot the hub is darkened, with far less visible detail, and the color balance of the sky area changes several times. These are reminders of the way consumer auto "features" act less like conveniences and more like defects. Because the lens zooms back and includes more of the dark interior than in the beginning, AGC causes the brightness of the sky and its details to change several times. There is no such thing as an "anti-AGC" filter to correct this, so you simply have to live with the results.

Consumers appear to be unaware of how jittery camera motion impairs and limits the action of denoisers and other filters. Frantic motion creates interlace and motion artifacts, as well as showing how much extra bitrate is required (and wasted) by such motion in final encodes.
Thanks for pointing this out, I noticed this too but assumed it was a camera specific feature which you have now confirmed. What can one do about the interlacing and motion artifacts you mention ? Can they be smoothed over ?

Now this is where I start getting a bit lost.....

Quote:
The script below uses an optional left-border cleanup routine with the chubbyrain2.avs and smoothUV.dll anti-rainbow filters. It's optional because another shot with different background colors under the cyan stain might be adversely affected. The GradFun2DBmod gradient smoother prevents hard edges and block noise in smooth areas in the final encode, necessary because of the fairly strong denoising required.

Color, saturation and levels tweaking were applied in VirtualDub using CamcorderColorDenoise, ColorMill, gradation curves, and VDub's graphical Levels filter. In particular, the sliding Levels control was used to restore some brilliance to the overhead sky, while the curves filter was used to limit super-brights to luminance-safe specular highlights in that area. Filter settings used were saved in a VirtualDub .vcf file as TestClip1_settings.vcf (attached).

Aliasing and line twitter along diagonals during motion are common problems with shutter operation in consumer cameras. This can be calmed to some extent with QTGMC and the vInverse filter. If it continues to be annoying, QTGMC can be modified to discard the alternate frame that is interpolated during deinterlace; effectively this removes 50% of the noise, as well as cutting temporal resolution in half. The result is 25fps progressive video. However, such segments can be encoded as interlaced (the encoder will embed interlace flags), so those progressive segments can then be merged with interlaced sections in the same final video.

Using QTGMC to produce progressive video can be done using the FPSDivisor parameter. In the script posted later below, the "normal" QTGMC statement is:

Code:
QTGMC(preset="medium",EZDenoise=6,denoiser="dfttest",ChromaMotion=true,\
ChromaNoise=true,DenoiseMC=true,GrainRestore=0.3,border=true)

The same statement can be modified with the FPSDivisor parameter to make 25fps progressive video:

Code:

QTGMC(preset="medium",EZDenoise=6,denoiser="dfttest",ChromaMotion=true,\
ChromaNoise=true,DenoiseMC=true,GrainRestore=0.3,border=true,\
FPSDivisor=2)

Below is the script I used to get the filtered 25fps interlaced result (attached as encoded "TestClip1_25i.mp4") Please note: jittery motion is seen by many temporal filters as seriously noisy. More noise = stronger settings = more cleanup work = slower filtering. Therefore this is a very slow running script, processing at about 3 fps.
Code:

Import("D:\Avisynth 2.5\plugins\chubbyrain2.avs")
Import("D:\Avisynth 2.5\plugins\RemoveDirtMC.avs")

AviSource("D:\forum\faq\willow5\D\test clip1.avi")
ColorYUV(off_u=8,off_v=-3)
ConvertToYV12(interlaced=true)
AssumeTFF()

### --- optional chubbyrain2 left-border routine --- ###
SeparateFields()
a=last

a
chubbyrain2()
smoothuv(radius=7)
crop(0,0,-688,0,true)
ColorYuv(off_v=4) #<- add some red to the new patch
b=last

overlay(a,b)
weave()
### --- end of optional chubbyrain2 left-border routine --- ###

QTGMC(preset="medium",EZDenoise=6,denoiser="dfttest",ChromaMotion=true,\
ChromaNoise=true,DenoiseMC=true,GrainRestore=0.3,border=true)
vinverse2()
BiFrost(interlaced=false)
DeHalo_Alpha(rx=2.5)
RemoveDirtMC(40,false)
GradFun2DBmod(thr=1.8)
LSFmod()
AddGrainC(1.5,1.5)
Crop(16,2,-4,-12).AddBorders(10,6,10,8)
SeparateFields().SelectEvery(4,0,3).Weave()
### --- To RGB32 for VirtualDub filters --- ###
ConvertToRGB32(interlaced=true,matrix="Rec601")
return last

Other than the chubbyrain2 routine and the VirtualDub filters mentioned earlier, the denoisers and chroma cleaners used were dfttest, RemoveDirtMC, and Bifrost. GradFun2DBmod is a gradient smoother, DeHalo_Alpha cleans edge halos, and LSFmod is a sharpener.
There is a lot of good information here but I am at a loss on how to capitalise on it. Where do I start with filters, scripts and so on ? Also, how did you get the file size down from c.90Mb to c.8Mb ? Did running these scripts alone reduce the file size ? Please assume I am a total novice (barely mastered capturing) and need a bit of hand holding through this phase.....

Quote:
The progressive version is attached as "TestClip1_25p.mp4". Although it is physically progressive, it's encoded with interlace flags. Some external players would play it as interlaced anyway. It doesn't have as much diagonal line twitter as the 25i version, but motion isn't as smooth.
Does it mean that progressive is better than interlaced or is this down to user preference ? I assume progressive removes the interlacing artifacts ? Is this a specific filter that can be applied to change between interlacing and progressive ? Which one is more popular with PAL captures ?

Quote:
I don't know where the loud hiss and noise are coming from in your sample, but it's badly over modulated. The audio was captured at 96KHz. Usually it would be 48KHz for capture.
Yes you are right, I did not change the default Vdub settings when capturing...does it mean I need to recapture ?

Now, I have a few additional questions:

1) Should I be using a separate sound card for audio capture ? I read somewhere on this forum that I must not use the in built sound card "line in" which I have been doing so far. My motherboard is Asus P4C-800e so not sure if this has adequate chipsets for audio capture. If the answer is that I need to use a separate sound card then please can you recommend one ?
2) If I wish to splice in other footage from other camera angles to make one edited video, how best could I do this ? Is VDub the best tool or do I need dedicated video editing software ? For example, I wish to retain the audio soundtrack from Camera 1 while using footage from Camera 2 at the same timecode
2a) Following on from 2), what comes first in terms of filtering then splicing in video ? Does one filter all video from camera 1 first then filter all video from camera 2 followed by splicing in the footage together to make 1 continuous video ? The reason I ask is because there are a few wedding tapes that I wish to merge together by taking the best of both cameras and making 1 good video which can be shared with the happy couple. I must point out that 1 is "professional VHS" while my video is at best Hi8
2b) How do I add titles and text to videos both as a black or white background and on top of the video ?
3) When batch capturing, do I need to look over the video to make a note of where the inserted frames are occurring, or could I do this retrospectively? Looking at your reply here, it would appear that I need to narrow down the time window in which these inserts happened. The only way I can do this going forwards is to watch over the captures as they are happening, which seems time consuming. I can, however, get a list of statistics post capture from VDub showing the number of dropped / inserted frames if this is helpful?

That is all I can think of for now, I am sure there will be many, many more questions though - thank you Sir !

-- merged --

Sorry one more question, the 8mm tapes I have were recorded on a mono camera. Is 48KHz suitable for mono audio or both mono and stereo?


Attached Images
File Type: jpg frame 129 initial lookover cropped.jpg (77.5 KB, 5 downloads)
  #8  
02-16-2019, 02:42 PM
lordsmurf's Avatar
lordsmurf lordsmurf is online now
Site Staff | Video
 
Join Date: Dec 2002
Posts: 13,503
Thanked 2,448 Times in 2,080 Posts
Quote:
Originally Posted by dpalomaki View Post
Would standard 8mm players
I often think confusion comes from using the term 8mm whatsoever, in reference to tape formats. It's Video8, Hi8, or Digital8. There actually is no "8mm" analog tape format. 8mm is film.

And yes, I made that mistake many times in past years, and sometimes still do.
But I saw the confusion it was causing.

Much like VHS, S-VHS, W-VHS, and D-VHS, users need to specify the exact format. Each is very different.

(I'm tired of guessing, because I seem to often guess wrong. )

  #9  
02-18-2019, 03:51 PM
willow5 willow5 is offline
Free Member
 
Join Date: Jun 2016
Posts: 137
Thanked 0 Times in 0 Posts
Hi Sanlyn,

Regarding the over modulation and loud hiss/noise in the clip, this is actually a hotel complex and the hiss you are hearing was actually a background waterfall. I checked another part of the tape and got no background noise at this sample rate, so perhaps you can review this clip and let me know if there is an issue with the audio - I am concerned that I might have to re-do all 30+ tapes as they were all set to 96KHz...


Attached Files
File Type: avi test clip 1.avi (93.18 MB, 11 downloads)
  #10  
02-18-2019, 05:48 PM
hodgey hodgey is offline
Free Member
 
Join Date: Dec 2017
Location: Norway
Posts: 1,680
Thanked 446 Times in 383 Posts
Quote:
Originally Posted by willow5 View Post
Can I ask why you say only with Hi8 tapes ? What happens if you use a Hi8 player with in built TBC with a non Hi8 tape ?
Not sure what sanlyn meant there; you would normally use the same gear to capture both, as (almost all) Hi8 devices have S-Video and better-quality playback. I prefer the image from the cameras over the VCRs, but there isn't a huge difference; the cameras have a slightly sharper image, though a bit more noise. The line TBC in the cameras helps against the image "wiggling" left and right. If you notice that on the captures, then a camera would most likely get rid of it.

Quote:
Originally Posted by willow5 View Post
When you say left-border cyan damage, do you mean the "green" line that runs from the top to bottom ? If so, this line appears on all recordings from this particular (non-Hi8) camera. I believe it was a budget camera so assumed it was a camera specific artefact. Is this not the case ?
Some of it is likely from the VCR. We have one of those VCRs here, as well as the higher-end EV-C2000, and both have this thing on playback, and captures I've seen from the top-end TBC-equipped decks also have it. This seems to be an issue with PAL Sony Video8/Hi8 gear in general; the newer TBC-equipped cameras do this as well, but on the right side instead. Looking at the last clip, it looks like there is some stuff on the left that may be from the original recording camera as well.

Quote:
There is a lot of good information here but I am at a loss on how to capitalise on it. Where do I start with filters, scripts and so on ? Also, how did you get the file size down from c.90Mb to c.8Mb ? Did running these scripts alone reduce the file size ? Please assume I am a total novice (barely mastered capturing) and need a bit of hand holding through this phase.....
After processing the video, one would normally encode it in a more compressed video format (typically mp4 files with the H.264 codec, or mpeg2 for DVD) for viewing. Virtualdub is able to do this.

Quote:
Does it mean that progressive is better than interlaced or is this down to user preference ? I assume progressive removes the interlacing artifacts ? Is this a specific filter that can be applied to change between interlacing and progressive ? Which one is more popular with PAL captures ?
Interlacing was a compromise solution between smoothness and image quality back when analog television was developed. It made sense for analog CRT screens, but a modern LCD panel can't display it natively. QTGMC (an avisynth plugin) is the deinterlacer of choice here. TVs (not computer monitors) often also contain built-in deinterlacing of quite good quality, provided they know what they are playing back is interlaced.

Quote:
Yes you are right, I did not change the default Vdub settings when capturing...does it mean I need to recapture ?
Some of the timing options may help with the inserted frames.

Quote:
1) Should I be using a separate sound card for audio capture ? I read somewhere on this forum that I must not use the in built sound card "line in" which I have been doing so far. My motherboard is Asus P4C-800e so not sure if this has adequate chipsets for audio capture. If the answer is that I need to use a separate sound card then please can you recommend one ?
Depends if you find the audio quality acceptable. The older onboard sound cards can sometimes pick up noise from the electrical components in the computer, like a buzzing hum sound and/or static.

Quote:
Sorry one more question, the 8mm tapes I have were recorded on a mono camera. Is 48KHz suitable for mono audio or both mono and stereo?
48KHz 16-bit Stereo is what you generally want to use, as it's the most common audio format on video. You won't gain anything from a higher sampling rate on these sources. 32KHz may be sufficient for what the Video8 audio can represent but it's not really worth the hassle for the small amount of space it would save. (The technical reasoning is that the sampling frequency should be twice that of the highest frequency the digital recording needs to represent + a little bit of headroom.)

Last edited by hodgey; 02-18-2019 at 05:58 PM.
  #11  
02-18-2019, 06:25 PM
dpalomaki dpalomaki is offline
Free Member
 
Join Date: Feb 2014
Location: VA
Posts: 1,694
Thanked 369 Times in 325 Posts
Worth noting that some Hi8 VCRs include a TBC. And in general the full-fledged VCRs have better (faster) tape handling. But Hi8 VCRs are pricey compared to high-end used camcorders.

96 kHz audio has little to no advantage for capture from consumer analog video tape. 48kHz 16-bit is more than adequate for capture of home video and is typical for distribution on DVD media. But there is no reason to recapture, just down-sample when the time comes. As noted, motherboard embedded audio has improved over the years. If you are happy with it, use it. Most Handycam audio was pretty bad, using poor microphones and having high noise floors. Mono audio is OK if that is how it was shot; converting to stereo just uses more bytes.
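
If it helps, the down-sampling can be done right in the processing script. A minimal sketch (the file name is a placeholder):
Code:
AviSource("D:\captures\tape01.avi")   # 96 kHz PCM audio from the original capture
ConvertAudioTo16bit()
ResampleAudio(48000)                  # down-sample to 48 kHz for delivery
# ConvertToMono()                     # optional, if the camera recorded mono anyway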

DVD is interlaced. Ultimately it will depend on your distribution format(s).
  #12  
02-19-2019, 10:36 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Quote:
Originally Posted by willow5 View Post
Quote:
How many inserted frames? With interlaced video it often works out to inserted fields rather than frames. In any case, an inserted image is a duplicate. How many? In my own experience over the years, I've had zero inserted frames in the main body of my captures. Rarely I've had one or two dupes in leader frames as the capture first started, but I don't start captures on important frames. You get an idea of where to look for dupes by observing the statistics update in the right-hand column of the VirtualDub capture screen. Unless you have some serious problems, you will likely seldom if ever see a duplicate during play. If you do spot them, it's during post-processing.
I am not sure; I think around 30 frames over a 90-minute period, which seems a lot to me. The question I have is how can duplicated frames be inserted without dropping any frames? Presumably something needs to drop to make space for a duplicated frame, otherwise you end up with more frames than you started with?
An inserted duplicate field or frame takes the place of a field or frame that never made it into the capture. In other words, an inserted frame replaces one that should have been present but isn't. When frames are simply dropped but not replaced, you have a gap in the video's flow.

Quote:
Originally Posted by willow5 View Post
Please assume that I am a total novice when it comes to scripts and so on so I do not really know what
to do with the scripts you have provided. Is there a comprehensive list of all available scripts and
uses/outputs so I can determine which ones to use in future or is it a case of asking on a case by case
basis ?
An Avisynth script is a plain text file typed in Notepad or another text editor and saved with an ".avs" file extension. You can use VirtualDub to run Avisynth scripts. Click "File..." -> "Open video file...", locate the .avs script, and click OK. VirtualDub sees Avisynth output as a video stream, the same way it "sees" an .avi or other compatible video. The output of the script is saved by VirtualDub using the colorspace and codec you specify, the same way it saves any other output file. You can also apply VirtualDub filters to Avisynth's output the same way you apply VDub filters to any other video that you open in VirtualDub -- although things might run slightly slower because you will be applying two processing steps at once, Avisynth's filters and VirtualDub's. Unless you specify how you want the output formatted and saved, Avisynth's output is completely un-encoded, uncompressed video.

There is no such thing as a comprehensive list of Avisynth scripts. A script is simply a list of instructions that are executed line-by-line in the sequence in which they appear. Certain instructions are common to all scripts -- for instance, you must have an instruction that locates and opens a video file before the script can do anything else. Otherwise every script is a custom job based on the video at hand. Some videos require very little fixup, others require a lot.
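
As a bare-bones illustration (the path is a placeholder), a complete working script can be as short as this -- save it in Notepad as something like "look.avs" and open it in VirtualDub:
Code:
AviSource("D:\captures\tape01.avi")   # the source line every script needs
ConvertToYV12(interlaced=true)
Histogram("Levels")                   # overlay the levels histogram for inspection
return last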

You learn about Avisynth by seeing what others have done with it. While filter documentation and discussion groups cover a lot of territory when it comes to Avisynth filters, what they can do, and how they are used, you can't work without sampling how others have used it for video. The same is true of VirtualDub: could you have figured out a cropping operation without the guide you referenced earlier? Would you know what to do with VirtualDub's GUI-based Levels filter? It was used in the thread you're reading right now, and it's been illustrated and discussed in specific detail in previous project threads. Cruising through the restoration forum is one way to find out what's going on, and it's the way most members here learned what they know.
Postprocessing video using AviSynth
Postprocessing video using VirtualDub
Using VDub's CamcorderColorDenoise, GUI Levels, and others: Information Overload #3.

Quote:
Originally Posted by willow5 View Post
Looking at this still from frame 129, what are you looking for in these colour histograms ? Also how do
you produce such colour histograms ?
The first thing I looked for was to see that luminance levels (in the white bar at the top of the graph) did not exceed y=16-235. The darker shaded borders on each side of the histogram indicate data that is darker than y=16 (the left border) and data that is brighter than y=235 (the right-hand border). The line down the middle of the graph indicates the middle point of the 16-235 spectrum (or about y=128).
How is it produced? I guess I'll have to repeat from the previous post:
Quote:
Originally Posted by sanlyn View Post
The first sample script is a standard way of taking an initial look at a scene. Original borders are cropped away to avoid affecting the histogram.
Code:
Crop(16,2,0,-10)
ConvertToYV12(interlaced=true)
Histogram("Levels")
Quote:
Originally Posted by willow5 View Post
Is it on a frame by frame basis or could you have this constantly running while
the video is playing ? What does "Good" look like ?
The short script quoted above will give you a histogram on every frame in your video. If your video has 1 frame, you'll get 1 histogram. If your video has 225,000 frames, you'll get 225,000 histograms, one on each frame.
Yes, it will stay there until you deactivate or erase those lines in the script.

There is no such thing as "good". Histograms are just information. If your histograms show that you have big, high, peaky lumps in the blue band and a short stubby red band, you can bet your life that your video looks mostly blue with very little red -- unless, of course, it's a picture of a small colored ball in a big blue sky, in which case you want a lot of blue and a little bit of the other colors.

The type of histogram shown in this current thread is a "parade" type, or a column of horizontal colored bands that stretch from left to right. There are other forms.
Understanding histograms Part 1 and Part 2, for cameras and video:
http://www.cambridgeincolour.com/tut...istograms1.htm
http://www.cambridgeincolour.com/tut...istograms2.htm
Avisynth Histogram functions: http://avisynth.nl/index.php/Histogram
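
The same Histogram() function has other modes worth trying. For example, the "Color" mode plots U against V, which makes a chroma bias like the one discussed earlier easy to spot (a sketch; the available modes depend on your Avisynth version):
Code:
Crop(16,2,0,-10)
ConvertToYV12(interlaced=true)
Histogram("Color")    # U/V chroma plot instead of the levels parade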

Quote:
Originally Posted by willow5 View Post
Quote:
I would forget about the left-border cyan damage. It will never repair satisfactorily, although the script managed to clean most of the blurred cyan smear. This and other discolorations result from tape aging and improper storage, as well as VCR playback peculiarities. Best to discard some of the bad pixels and re-center the image with a clean border.
When you say left-border cyan damage, do you mean the "green" line that runs from the top to bottom ? If so, this line appears on all recordings from this particular (non-Hi8) camera. I believe it was a budget camera so assumed it was a camera specific artefact. Is this not the case ?
Are you sure it's just a green line? It looks like a cyan line inside a vertical cyan stain about 18 pixels wide. When you get into color correction you have to be more precise. The average RGB values of that area were Green-189 + Blue 174, which is cyan with a slight green bias. Green is a primary color. Cyan is a secondary color made up of green and blue. If you had to correct a color imbalance, the correction for green wouldn't be the same as a correction for cyan.
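
Purely as an illustration of that point (the strengths are made up, and this belongs in an RGB stage of the chain): a cyan cast needs two channels touched, a green cast only one.
Code:
ConvertToRGB32(interlaced=true, matrix="Rec601")
RGBAdjust(g=0.95, b=0.95)   # toning down a cyan cast: reduce green and blue together
# RGBAdjust(g=0.95)         # a pure green cast would only need the green channel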

If this discoloration happens on all tapes, you have a problem to deal with. I've had some border stains now and then, but not all alike, not on all tapes, and not on all players. I've used high-end players from Panasonic and JVC, and stains just seem to show up whenever they feel like it. Often they go away when I switch players.

Quote:
Originally Posted by willow5 View Post
Quote:
There are also about 4 pixels of yellow noise on the right border. Other scenes without the same damage won't have borders that exactly match those in this segment. Most scenes in other segments will have dirty borders of one kind or other and SMPTE 4:3 frames usually have most image content in only 704 of 720 pixels. The changeover of almost-similar borders during playback will be so fast and subtle that no one will notice. This sort of compromise is done all the time, especially with archival newsfilm.
To the untrained eye, I am not sure what you are referring to here. Is there a still you can post to show these 4 pixels ?
frame 0 top field lower right corner


Quote:
Originally Posted by willow5 View Post
What is meant by "Other scenes without the same damage won't have borders that exactly match those in this segment" ? Does it mean that the tape is damaged or that the border varies according to scene/frame ? Is this a variable parameter ? What is the optimum setting therefore from a cropping point of view ? Presumably the crop cannot change on a frame by frame basis, I guess you need to choose a setting and stick with it throughout the capture ? In my case, is this the optimum crop setting that you posted earlier:
Quote:
Crop(16,2,0,-10)
Video is generally post-processed in segments, with filters and other procedures to suit the particular segment. If you capture 7 tapes and all 7 videos in their entirety have different border thickness, you're welcome to crop a big swatch to suit all of them and throw away some screen real estate on some videos, or you can crop to suit each case and end up with slightly different border thicknesses. They are black borders that will disappear against the display's black background, so no one will notice small differences. It's up to you.
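
The mechanics are the same as in the script posted earlier in the thread: crop away the dirty edges for the segment at hand, then pad back to a full 720x576 frame so all segments still match. The values below are the ones used for this particular clip; other segments would get their own.
Code:
Crop(16,2,-4,-12)        # discard the dirty left/top/right/bottom pixels for this segment
AddBorders(10,6,10,8)    # pad back to 720x576 with clean black borders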

Quote:
Originally Posted by willow5 View Post
Quote:
Another variation: In the beginning of the shot, the large central octagonal hub has fairly bright shadow detail. As the camera zooms back, by the end of the shot the hub is darkened, with far less visible detail, and the color balance of the sky area changes several times. These are reminders of the way consumer auto "features" act less like conveniences and more like defects. Because the lens zooms back and includes more of the dark interior than in the beginning, AGC causes the brightness of the sky and its details to change several times. There is no such thing as an "anti-AGC" filter to correct this, so you simply have to live with the results.

Consumers appear to be unaware of how jittery camera motion impairs and limits the action of denoisers and other filters. Frantic motion creates interlace and motion artifacts, as well as showing how much extra bitrate is required (and wasted) by such motion in final encodes.
Thanks for pointing this out, I noticed this too but assumed it was a camera specific feature which you have now confirmed. What can one do about the interlacing and motion artifacts you mention ? Can they be smoothed over ?
Smoothing some fuzzy edges and aliasing was demonstrated in the script and in the two videos. For excessive interlace combing the vInverse filter was used in the script. QTGMC itself at certain settings does correct some motion shimmer and various types of grainy noise. The Santiag filter was used to correct some mild but visible aliasing and edge twitter on diagonal forms in the upper right of the video during motion, while the more extreme case was discarding alternate interlaced fields for the progressive version.
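
If the diagonal twitter still bothers you, an extra anti-aliasing pass can be slotted in right after deinterlacing. This is only a sketch and assumes the santiag plugin (and its dependencies) is installed:
Code:
QTGMC(preset="medium")      # deinterlace first
vinverse2()                 # calm residual combing
santiag(strh=2, strv=2)     # extra anti-aliasing passes on diagonal edges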

Quote:
Originally Posted by willow5 View Post
Where do I start with filters, scripts and so on ? Also, how did you get the file size down from c.90Mb to c.8Mb ? Did running these scripts alone reduce the file size ? Please assume I am a total novice (barely mastered capturing) and need a bit of hand holding through this phase.....
I don't know which file size you refer to as 90mb. Your sample was unencoded, losslessly compressed YUY2 intraframe video at about 87mb with uncompressed PCM audio, which is generally correct for analog source capture. The mp4 samples were lossy-encoded YV12 interframe video with highly compressed AAC audio at about 6.5mb.

In unencoded or losslessly compressed intra-frame video using lossless codecs such as Huffyuv or Lagarith, each frame is a complete image in itself. Because every frame in an intraframe video is a complete image, intraframe videos are larger than lossy encodes. The lossless working files were saved as YV12 using the Lagarith lossless codec, ready for final encoding elsewhere (Huffyuv cannot be used with YV12).

With lossy interframe encoding such as DVD (MPEG) or h.264, frames are arranged in GOPs (Groups of Pictures). Each GOP consists of one or a few complete images called key frames, while the rest of the frames are incomplete images that contain data only for the changes that occur between key frames and between preceding and following GOPs. Because most frames in an interframe video are incomplete images that contain only the changes between key frames, and because lossy compression also discards some data that the codec considers unimportant, and because MPEG and h.264 use only 50% of the chroma resolution of YUY2 work files and only 25% of the chroma resolution of RGB work files, interframe files are much smaller. The differences between lossless and lossy video explain why lossy final delivery formats like MPEG and h.264 are not recommended for capture or for post-processing work. They are called "final delivery" because they are not designed for further modification without serious quality loss. The mp4's were encoded using h.264 with TMPGEnc Mastering Works.
Intraframe vs. Interframe Compression

Quote:
Originally Posted by willow5 View Post
Quote:
The progressive version is attached as "TestClip1_25p.mp4". Although it is physically progressive, it's encoded with interlace flags. Some external players would play it as interlaced anyway. It doesn't have as much diagonal line twitter as the 25i version, but motion isn't as smooth.
Does it mean that progressive is better than interlaced or is this down to user preference ? I assume progressive removes the interlacing artifacts ? Is this a specific filter that can be applied to change between interlacing and progressive ? Which one is more popular with PAL captures ?
Deinterlacing is a destructive process. The quality loss from minimal to visible depends on how it's done. Because of this, interlaced video should remain interlaced. There are various means of filtering interlaced media without deinterlacing, but sometimes deinterlacing is required for various reasons, such as repair requirements (would you rather deinterlace and remove the defects, or keep it interlaced and live with defects such as duplicate or missing frames, or frames with badly corrupt data?), or for web mounting and other special purpose players. Sometimes interlacing is required, as with DVD and certain Bluray formats.

If you have two video sources, one interlaced and one progressive, both at the same frame rate, you cannot mix the two formats in the same video. The fix is to encode the progressive vid with embedded interlace flags, which instructs the player to consider everything as interlaced. This is also done for purely progressive formats such as animation, silent film, or Hollywood originals, which are usually created at 15 or 18 fps, 23.97 or 24 fps, or other progressive frame rates which are doctored in various ways (repeated frames, telecine or pulldown effects, etc.) to make them play at 29.97 fps. Many PAL versions of 23.97 or 24 fps movies are simply speeded up to 25 fps and encoded as interlaced for DVD (Movie fans really despise this!).

Interlaced video at 29.97 fps plays at 59.94 fields per second. When it's deinterlaced and all fields are retained and interpolated into full-sized frames, it runs at 59.94 full frames per second. The same methods are used for the PAL 25 fps standard. This "normal" deinterlace method doubles the frame rate, doubles the number of frames, and makes a larger file. Some broadcast sources and some players can't handle 59.94 or 50 full frames per second. For some media there are compromises where video is deinterlaced and alternate fields are dropped, keeping the final progressive output at the original frame rate -- but that's a cost of 50% of the original temporal resolution, so in most cases motion will not play as smoothly.

These days, QTGMC is the prime deinterlacer. QTGMC is an Avisynth plugin. Next in quality is your media player or TV. Below that but still usable are various implementations of the yadif algorithm. The lowest quality are simple bob filters available in many editors, usually used for quick testing. Big-time professional software used by Disney et al. uses other methods, but QTGMC is also used in studios. Note that "big-time professional" doesn't include anything you can buy online or at Amazon, including Adobe or Vegas. You'll have to spend plenty of money and invest years of training to use what Hollywood uses.

There is no such thing as "best" between interlaced or progressive. All depends on how the media is processed and played.

Quote:
Originally Posted by willow5 View Post
2) If I wish to splice in other footage from other camera angles to make one edited video, how best could I do this ? Is VDub the best tool or do I need dedicated video editing software ? For example, I wish to retain the audio soundtrack from Camera 1 while using footage from Camera 2 at the same timecode
2a) Following on from 2), what comes first in terms of filtering then splicing in video ? Does one filter all video from camera 1 first then filter all video from camera 2 followed by splicing in the footage together to make 1 continuous video ? The reason I ask is because there are a few wedding tapes that I wish to merge together by taking the best of both cameras and making 1 good video which can be shared with the happy couple. I must point out that 1 is "professional VHS" while my video is at best Hi8
2b) How do I add titles and text to videos both as a black or white background and on top of the video ?
You apparently have a lot of catching up to do. Some editing and cut-and-join work can be done in Avisynth and VirtualDub, but for timeline, special effects, audio overlay, etc., you need a retail editor. Watch out for totally free editors; they're usually short on features and they assume that you already know what you're doing, which is apparently not the case. Don't invest in high-priced mammoth software packages at this point. We find that Corel's VideoStudio Pro is perfectly suited for what you describe above, including special effects, sound and title overlays, and output for the web, DVD, BluRay, and AVCHD. It's easier to learn and to use than the mammoth software bloat from Adobe.

For post-processing and repair, remember that editors are editors, period. They are not restoration apps. For the kind of cleanup you see here and in other restoration threads, you need Avisynth and VirtualDub. NLE editors have nothing even vaguely approaching the cleanup and repair abilities or precision operations of Avisynth and VirtualDub.

Quote:
Originally Posted by willow5 View Post
3) When batch capturing, do I need to look over the video to make a note of where the inserted frames are occuring or could I do this retrospectively ? Looking at your reply here, it would appear that I need to narrow down the time window that these inserts happened. The only was I can do this going forwards is to watch over the captures as they are happening which seems time consuming. I can, however, get a list of statistics post capture from Vdub showing the number of dropped / inserted frames if this is helpful ?
True, you have to pay attention to captures while you're capturing. You described an inserted frame rate of one every 3 minutes, but dupes generally occur in groups, not in regular cycles. There are Avisynth filters that can be used in various ways, but you're better off finding out why you have inserted frames to begin with. They are usually the result of system bottlenecks (capturing to the same drive or partition that contains the operating system, playing the audio track while capturing, or some other system glitch).

Filters and process functions: Avisynth has over 1000 builtin filters and functions, with several hundred external filters posted on the internet. In no way will you ever need to know about all of them. Virtualdub has dozens of builtin filters, with a couple of hundred available online. Avisynth plugins are kept in its program folder in a plugins subfolder, and VirtualDub also has its own plugin subfolder.
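
Plugins dropped into that folder normally load automatically, but a script can also pull one in explicitly, the way the script earlier in this thread imports chubbyrain2.avs. The paths below are placeholders:
Code:
LoadPlugin("D:\Avisynth 2.5\plugins\mvtools2.dll")    # binary plugins (.dll)
Import("D:\Avisynth 2.5\plugins\chubbyrain2.avs")     # script plugins (.avs / .avsi)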

Avisynth internal filters: http://www.avisynth.nl/index.php?tit...nalignedSplice. This list doesn't include several hundred builtin special programming functions.

Avisynth external plugins (one of several sources): http://avisynth.nl/index.php/Externa...s#Introduction.

VirtualDub builtin filters are visible in its filter dialog window.
Popular source of VirtualDub external plugins: http://www.infognition.com/VirtualDubFilters/

Many digitalfaq threads have updated filter posts and special packages for popular plugins. The plugin package most often requested is for QTGMC, which needs several support files. The .zip package has the plugins, documentation, Windows system support files and VC++ support file links, and complete instructions. The Updated QTGMC filter package 21-21-2018.

Just a few days ago there were forum posts with links to special restoration samples and projects. There were 6 or 7 projects featured, with links in post #10 and post #11 in an earlier thread.

There is a detailed workflow discussion + illustrations + script details and filter links in the thread "Restoring a bunch of VHS workout videos?", post #25 and post #26.

There are before-after demo images, with workflow, filter, and script details in another thread in post #20.

There are many, many more. All you have to do is what the rest of us have done: browse forum posts that tackle problems and illustrate how they're solved. Don't neglect to post your own samples and questions. There is no better way to learn video.


Attached Images
File Type: png frame 0 top field lower right corner.png (318.7 KB, 319 downloads)

Last edited by sanlyn; 02-19-2019 at 10:46 PM.
Reply With Quote
The following users thank sanlyn for this useful post: captainvic (03-12-2019), willow5 (03-15-2019)
  #13  
02-21-2019, 01:31 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
@willow5, Sorry for the delay, but I'm working up more detailed materials to get you started. Family medical issues are slowing things down, but I will be posting in a day or two.
Reply With Quote
  #14  
02-22-2019, 03:32 PM
willow5 willow5 is offline
Free Member
 
Join Date: Jun 2016
Posts: 137
Thanked 0 Times in 0 Posts
Thanks Sanlyn, hope everything is ok at home
Reply With Quote
  #15  
03-11-2019, 05:36 AM
willow5 willow5 is offline
Free Member
 
Join Date: Jun 2016
Posts: 137
Thanked 0 Times in 0 Posts
Hi Sanlyn, are you now able to provide some further material?
Reply With Quote
  #16  
03-11-2019, 09:12 AM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Sorry, willow5, just spent a few hours last night updating some old links before I post them. Keeping up with this forum keeps getting interrupted these days by kids' medical issues and a 740-mile move to a new house. But I didn't forget you. Will be back soon.
Reply With Quote
  #17  
03-12-2019, 10:21 AM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Quote:
Originally Posted by willow5 View Post
There is a lot of good information here but I am at a loss on how to capitalise on it. Where do I start with filters, scripts and so on ? Also, how did you get the file size down from c.90Mb to c.8Mb ? Did running these scripts alone reduce the file size ? Please assume I am a total novice (barely mastered capturing) and need a bit of hand holding through this phase.....
That's understandable. However, your questions revealed that you're still in a very basic learning stage, so my first stream of answers and those of other readers were intended to give you some sort of starting point.

Fire up your coffee maker or teapot (or both), get comfy, put on your patience cap, and slow down....

This time around I'll cover details of the script in post #5. But understand that copying a script or list of filters for one troublesome shot in a long video won't provide all the answers or filters for shots that need less work and shots that need more. Home camera videos are especially trying: hectic camera motion, poor exposures, mixed lighting, blown-out highlights and crushed darks -- they're not nearly as easy to handle as retail video (although I've had some retail tapes that make you wonder what kind of demons mastered them!).

You mentioned that you have other sample captures from video shot under different conditions. Obviously different conditions or different problems will call for many filter variations. It would be a good idea to upload some of those videos so that you can get a handle on different process options.

The version of Avisynth that's best for beginners is 32-bit v2.6 dated May 2015. It can be downloaded as Avisynth_260.exe at https://www.videohelp.com/download/AviSynth_260.exe. There are many other versions and revisions that you can try at your own peril, but this one has worked steadily for most users since 2015.

The .exe will create an Avisynth program folder whose default location in 32-bit Windows is the folder "C:\Program Files". In 64-bit systems the default location is "C:\Program Files (x86)". You can tell the installer to create that Avisynth program folder on any drive or partition you want. The only configuration you need to set in the install dialog is to tell Avisynth to associate .avs files with Notepad, so that when you double-click an .avs file Windows will open it in Notepad for you. The installer will load one or two dll's into your Windows system area and will create a few registry entries.

Windows will create a program group in your program listings containing links to online documentation, the plugins folder, internet links, and an uninstaller. All of that stuff is located in the Avisynth program folder and subfolders. BTW, running the uninstaller simply removes the Avisynth system dll and the registry entries. Your Avisynth folder will remain intact, along with all its plugins.

One thing that does not get installed is anything that looks like "avisynth.exe" or similar. There is no executable. You type an .avs script and run it in Virtualdub or a klutzy app called AvsPMod. Avisynth Help suggests that you can run scripts in Windows Media Player. Don't do it. WMP is a crippled app these days. VirtualDub is much easier.

Avisynth external plugins usually come as zip'd files, often with extra documentation and other materials. Sometimes you just get the plugin. There's usually extra info somewhere in the Avisynth wiki -- just go to Google, enter "Avisynth" plus the name of a plugin or function, and you'll get more than you bargained for. Try "Avisynth getting started" or "Avisynth Guides" for starters.

How to download Avisynth plugins: Never download the plugin package directly to your plugins folder. In a very short time your folder will be filled with junk and will no longer function. Create a new folder somewhere on a hard drive. Call it "Avisynth Plugins" or "Plugins Storage" or whatever. Then, for each new plugin that you download or for any docs or articles that you want to add for it, create a subfolder with that plugin's name on it. Download the plugin or package to that subfolder. You'll always know where that material is located. Unzip the package if necessary, and examine the contents.

Avisynth plugins come in three flavors: compiled ".dlls", ".avsi" script files, and ".avs" script files. A .dll plugin and an .avsi plugin load automatically into memory when your script calls for it. But an .avs has to be imported explicitly into your own script using the Import() command. Why the three types? There's no time for that here, but let's say that the programmers and designers have their own reasons, especially when multiple revisions of the same filter exist and the versions contain much of the same text code that can't be auto-imported by all of the versions at once.
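As a quick illustration of those loading rules (the paths below are placeholders, not anything from this project):
Code:
# a .dll or an .avsi sitting in the plugins folder loads by itself the first time it's called;
# an .avs plugin has to be pulled in by hand:
Import("D:\Avisynth Plugins\RemoveDirtMC\RemoveDirtMC.avs")   # explicit import of an .avs
AviSource("D:\captures\tape01.avi")                           # hypothetical source
RemoveDirtMC(30,false)   # works because the .avs was imported above; a .dll/.avsi needs no Import()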

The only files you need to copy into your plugins folder are .dll, .avsi, or .avs. On rare occasions a Windows system support dll might be required: you'll be furnished with Microsoft links that handle the whole thing for you, no problem. But Windows system dll's are never installed as Avisynth plugins.

VirtualDub filters have a .vdf file extension. Handle these downloads the same way you manage Avisynth plugins, using a master folder and subfolders, and copy the .vdf into your VirtualDub plugins folder.

Use 32-bit plugins for 32-bit Avisynth and VirtualDub. You can go full 64-bit if you want -- just don't complain when you find out how many great 32-bit plugins don't have 64-bit versions. And until you find out more about what you're doing, keeping up two versions of everything is a headache you don't need now.

Zip'd packages use pkzip for compression and are posted as .zip files. Windows 7 and later have builtin unzip capability. Other packages come as .7z files, which is 7zip's format. 7Zip has an excellent free utility for 7z files. There are two download links (32-bit and 64-bit) just below the top of 7Zip's main page at https://www.7-zip.org/.

If you haven't done so, download the linked QTGMC .zip package mentioned earlier. You'll find complete instructions in that download along with some useful filters and documentation that will come in very handy later. You'll also have a good hands-on start with one of Avisynth's most popular multi-function filters.

Now for line-by-line details on the Avisynth script in post #5:

Import("D:\Avisynth 2.5\plugins\chubbyrain2.avs")
Import("D:\Avisynth 2.5\plugins\RemoveDirtMC.avs")

Change the path name to the path for the plugins' location in your system.
The first two lines are Import() functions that, during runtime, will copy the text of the named .avs plugins into your running code. As noted earlier, plugins are often published as .avs scripts because there can be so many versions of the same filter with only minor variations. RemoveDirtMC contains multiple versions of RemoveDirt's functions -- if you loaded all of them automatically with a .dll or .avsi, Avisynth would get confused. And so would you. You won't notice it yet, but one of these lines contains a minor error. I'll point it out later in the notes that follow.
Import(): http://avisynth.nl/index.php/Internal_functions#Import.

AviSource("D:\forum\faq\willow5\D\test clip1.avi")
AviSource is the most versatile function for opening and decoding .avi source videos. This function finds and uses the codecs installed in your system. There are many other file opening functions and utilities for other video and audio formats.
AviSource(): http://avisynth.nl/index.php/AviSource

ColorYUV(off_u=8,off_v=-3)
ColorYUV is a multi-function command that works with YUV color and is very versatile. In this case, "off_u=8" raises the color value of all U-channel (blue-yellow) pixels by 8 points, i.e., it basically brightens blue chroma. "Off_v=-3" decreases the V-channel offset by 3 points, i.e., it basically decreases the amount of red chroma. This has no effect on any RGB corrections you might later make in VirtualDub. It's simply a quick initial color balance step. Avisynth's Help describes ColorYUV's functions, but to get into depth with color correction you can find great resources in internet tutorials for Photoshop and Adobe Premiere, or you can study forum posts to see how others handle color.
ColorYUV(): http://avisynth.nl/index.php/ColorYUV
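If you want to see what those offsets are actually doing, a simple side-by-side histogram check works. This is just a throwaway inspection script (my illustration, not part of the posted workflow), reusing the same source path:
Code:
src = AviSource("D:\forum\faq\willow5\D\test clip1.avi").ConvertToYV12(interlaced=true)
fix = src.ColorYUV(off_u=8,off_v=-3)
# left half: untouched capture; right half: after the offsets - watch the U and V traces
StackHorizontal(src.Histogram(mode="levels"), fix.Histogram(mode="levels"))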

ConvertToYV12(interlaced=true)
AssumeTFF()

All color space conversion functions in Avisynth use high-precision algorithms. Avisynth makes these conversions as cleanly as any pro app and can do it cleaner than most of the others -- but to take advantage of that precision you must tell Avisynth the interlaced state of the video. Many big-name NLE's go amiss here. At this point your video is still interlaced. YV12 will be needed for most of the filters that follow.

Avisynth assumes that all video is Bottom Field First (BFF). AssumeTFF() overrides that assumption and tells Avisynth that your video is Top Field First (TFF), which is true of most interlaced or telecined analog video except for the old consumer DV format. You can check the field order of your videos to make sure. See neuron2's classic html faq, Neuron2_How To Analyze Video Frame Structure.zip.
Convert(): http://avisynth.nl/index.php/Convert
AssumeTFF(): http://avisynth.nl/index.php/Parity
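If you'd rather verify field order yourself instead of taking it on faith, a quick test is to separate the fields and watch the motion. A minimal sketch, again only an illustration using the same source:
Code:
AviSource("D:\forum\faq\willow5\D\test clip1.avi")
AssumeTFF()        # run the test once with AssumeTFF(), once with AssumeBFF()
SeparateFields()
Step frame-by-frame in VirtualDub through a scene with motion: with the correct assumption the motion marches smoothly forward; with the wrong one it stutters two steps forward, one step back.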

### --- optional chubbyrain2 left-border routine --- ###
separatefields()
a=last

As stated, the routine that follows the comment statement is optional and can be removed. But your side border stains won't be filtered, which is what this routine tries to do. (A comment always begins with the "#" character and is never executed).
SeparateFields extracts all of the half-height interlaced fields in the video and builds a continuous stream of them. You can think of this process as a kind of non-destructive deinterlace, although it isn't technically a deinterlace, and the fields keep their half-height size. This is being done because the filters that follow specify that they won't work properly while the video still consists of frame-woven interlaced fields.

a=last assigns a place in memory that I have arbitrarily named "a". This "a" will be the name of the video clip as it existed at the "last" time anything was done to it -- and the last thing that was done to it was the conversion to a stream of separated fields. So that stream becomes a clip named "a". You can invent entities and name them anything you want, as long as the names aren't the same as a function or filter. Avisynth will remember that "a" is the newly created stream of fields at the time the fields were separated. You'll find the term "last" used often in Avisynth; its meaning is "the last step that was completed".
SeparateFields(): http://avisynth.nl/index.php/SeparateFields


a
chubbyrain2()
smoothuv(radius=7)
crop(0,0,-688,0,true)
ColorYuv(off_v=4) #<- add some red to the new patch
b=last

This is the main stain filter routine. By mentioning the term "a" by itself the way it's shown here, I'm bringing execution focus to the "a" clip created earlier. Filtering that follows will apply to that video stream named "a". But watch what happens later to the results of that filtering in the last line of the above group of statements.
chubbyrain2() is an anti-rainbow filter designed to smooth out unwanted discolorations. It's followed by another chroma cleaner or smoother, smoothuv(radius=7). The radius parameter is a strength setting that controls how wide an area the filter examines when smoothing out unwanted chroma disruptions (that is, variations in chroma noise). ColorYuv(off_v=4) then raises the V channel by 4 points -- as the comment in the script says, it adds a little red back to the filtered patch so it blends with the surrounding picture.

After those three cleaning steps are executed, crop(0,0,-688,0,true) crops away 688 pixels off of the cleaned video's right-hand side that are no longer needed. The part that remains is the newly filtered left-hand 32 pixels, which formerly contained the original stained area that has now been at least partially cleaned. This narrow strip of cleaned video is what we want to use later.

What do we do with that 32-pixel strip of cleaned video? We save it as a clip named "b". So the script has now created two new video clips, one named "a" and one named "b". Because everything we did to "a" was saved under the new name "b", the original unfiltered "a" still exists untouched.
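For the arithmetic-minded, assuming the usual 720-pixel-wide capture (the post never states the frame width):
Code:
# crop(0,0,-688,0,true)  ->  720 - 688 = 32 columns kept: just the stained left edge
# so clip "b" is a cleaned 32-pixel-wide strip, ready to be pasted back over "a"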

overlay(a,b)
weave()
### --- end of optional chubbyrain2 left-border routine --- ###

Now you know why we saved the original "a" clip and the thin 32-pixel clip under two different names. overlay(a,b) overlays the original stained area of "a" with the 32 filtered pixels from all the matching "b" frames, which covers most of the original stained border. Then the newly created stream of cleaned-up fields is re-woven back into a properly interlaced video, in its original frame-and-field sequence, with the Weave() function.

chubbyrain2.avs is an old favorite avs plugin but is now posted in the Avisynth wiki as chubbyrain2.avsi. What? Didn't we say that .avsi files were imported automatically and didn't need an Import() statement? Well, yes, that's true. By including that import statement at the very beginning of the script, the script would override the automatic loading of the .avsi plugin and would instead load only the .avs specified in the statement. Why did I do that and why am I still using the older .avs version? I use the old .avs version because the new .avsi version doesn't make nice with older scripts. It isn't that the logic has changed, it's that the logic is now written in a way that won't jive with older scripts that used the older support files.

Since you don't have any of the older scripts that I use, I suggest that you download the newer chubbyrain2.avsi from http://avisynth.nl/index.php/ChubbyRain2. However....if you look at older scripts posted in this and other forums, you'll see "chubbyrain2.avs" posted in those scripts. Just be aware of it. You can have both an older .avs version and a newer .avsi version in your plugins. The .avsi version will load automatically when chubbyrain2 is required, unless you first load the old .avs version with your Import() statement. So that "error" I mentioned is really only a minor one, and only because it really wasn't required. But if you keep that Import() statement in a script and a chubbyrain2.avs file doesn't actually exist, you'll get a runtime error. Therefore, the original chubbyrain2.avs is attached to this post as "chubbyrain2.avs".

Note that chubbyrain2 also requires three support filters. The first filter it requires is Bifrost.dll, previously posted to digitalfaq as Bifrost_v2.zip.
The second support file required by chubbyrain2 is the old cnr2 chroma cleaner, posted as cnr2_v261.zip.
The third support file required by chubbyrain2 is Masktools2. That's a really big filter used by many other plugins. You will find it in the QTGMC plugins package mentioned earlier (See? I told you QTGMC would be important, even if you don't use QTGMC itself).

SmoothUV.dll is another chroma noise cleaner. The current version is still in the old zip package smoothuv_dll_20030902.zip at http://avisynth.nl/index.php/SmoothUV.

Crop(): http://avisynth.nl/index.php/Crop
Overlay(): http://avisynth.nl/index.php/Overlay
Weave(): http://avisynth.nl/index.php/Weave

QTGMC(preset="medium",EZDenoise=6,denoiser="dfttest",ChromaMotion=true,\
ChromaNoise=true,DenoiseMC=true,GrainRestore=0.3,border=true)
vinverse2()
Congratulations, you made it to the feature event, which is the combination of deinterlacing (required here because of the nature of the cleanup), chroma denoising, general denoising, and motion and shimmer smoothing. QTGMC does all that and is used here at a fairly midstream strength. Its preset parameter of "medium" sets values for several dozen hidden operations. Plus, I'm upgrading some of those parameters and adding one or two to give QTGMC more work. Denoising is augmented to a value of 6 and dfttest is specified (it's also a well-known noise smoother in its own right, and it comes with the QTGMC package), ChromaNoise and ChromaMotion filtering are turned on along with Motion compensation (DenoiseMC). A small portion of the initial grain structure will be restored to help prevent an over filtered look. Border=true uses special resizing to avoid edge border twitter when the full-size progressive frames are interpolated. Then vInverse2() is used to smooth excess interlace combing effects. The result of this deinterlacing is the default 50fps progressive video.

QTGMC is in the QTGMC.zip package mentioned earlier.
vInverse2 is in vinverse-x86.zip, the 32-bit version at http://avisynth.nl/index.php/Vinverse. It requires the Microsoft VisualC++ 2012 runtime, which is installed when you set up the QTGMC.zip package.

BiFrost(interlaced=false)
DeHalo_Alpha(rx=2.5)

I tried giving that stain one more swipe with BiFrost, the same filter discussed earlier. That stain is one persistent beast. Every little bit helps.
DeHalo_Alpha(rx=2.5) is an anti-halo filter for cleaning up oversharpening edge halo artifacts. They are so common with home video tape that you can just as well expect them as normal behavior. The "rx=2.5" setting slightly increases its default halo thickness detection. DeHalo_alpha has been posted as DeHaloALpha.zip.
It requires MaskTools2 and RemoveGrain (part of RGTools), all of which are in the QTGMC package.

RemoveDirtMC(40,false)
RemoveDirtMC is run here at a strength of 40, fairly moderate but visibly effective. It addresses what people like to call floating VHS grunge, or tape noise, as well as some small spots and often some horizontal rips or ripples. It's an .avs script because there are so many conflicting versions of RemoveDirt and RemoveDirtMC; you'll find variations used everywhere. It's available as RemoveDirtMC.avs.

RemoveDirtMC requires the following support files:
- If you're using Windows 7 or later, you'll likely need two older VisualC++ 32-bit runtimes that Microsoft in their infinite wisdom forgot to furnish. See the thread "Fix for problems running Avisynth's RemoveDirtMC", which has download attachments for Msvcp71.dll and Msvcr71.dll; you simply copy them into your Windows system folder. Instructions are in the posted thread.
- RemoveDirtMC also requires the RemoveDirt_v09.zip package. Create a folder called "RemoveDirt v09", download the .zip into that folder, and unzip it. Copy the required .dll filters into your Avisynth plugins.
- RemoveDirt also requires either RemoveGrain_v1_0_files.zip or the later RGTools.dll. Not to worry: both of those packages are installed with the QTGMC.zip package.

GradFun2DBmod(thr=1.8)
This is a smoother for color or luma banding (hard edges in areas where there should be smooth transitions). Look at the grainy, murky mess in the center of the ceiling hub as the camera moves in on it. The filter is posted as GradFun2Dbmod.zip. It requires the original GradFun2db, which is included in the .zip along with some docs and other links. GradFun2DBMod also requires MaskTools2, RGTools, and AddGrainC, all of which are included in the QTGMC package.

LSFmod()
AddGrainC(1.5,1.5)

LSFMod is a sophisticated sharpener that attempts to do its job without creating more sharpening noise or edge artifacts/halos. It beats ordinary sharpeners, especially those in NLE's and VCRs. It's a favorite that has been around for a long time and is still used as a support filter by some bigger complex plugins. When it comes to sharpeners, follow this rule: always denoise before sharpening. It doesn't make sense to sharpen noise.

LSFmod is available in the wiki pages at http://avisynth.nl/index.php/LSFmod. The current version is LSFmod v1.9.avsi, which autoloads when called from your script. LSFmod requires the following:
- MaskTools2 and its support files, which are included with the QTGMC .zip package.
- RgTools, which is included with the QTGMC .zip package.
- aWarpSharp, in its current version aWarpSharp2, is used to tighten bleeding colors closer to edges. aWarpSharp can also be used as a general sharpener, but it does tend to thin narrow lines, which is why it works to reduce chroma bleed. aWarpSharp is posted at digitalfaq as aWarpSharp2_2015.zip. It requires the Microsoft 2015 VisualC++ runtimes, which are installed when you set up the QTGMC package.
- the VariableBlur plugin, posted as VariableBlur_070.zip. VariableBlur requires the Windows FFTW3 system library, which is supplied with the QTGMC.zip package. It also requires the free Microsoft VisualC++ 2010 runtime, which is used by several other plugins and Windows apps and is available at http://www.microsoft.com/en-us/downl...s.aspx?id=8328. VariableBlur is used as support for a number of other filters.

AddGrainC(1.5,1.5)
AddGrainC is used here to add a layer of ultra-fine film-like grain to help prevent a plastic, overfiltered look. It is available in the QTGMC package.

Crop(16,2,-4,-12).AddBorders(10,6,10,8)
One of the last steps is old dirty border cleanup. Crop() removes some border noise as well as a very small part of the left-hand stain which is more noise than stain. Crop() removes unwanted border pixels starting from the left and moving clockwise around the frame, in this order: 16 pixels are removed from the left border, 2 pixels from the twittery border across the top, 4 pixels from the right border, and 12 pixels of head switching noise from the bottom.
AddBorders(10,6,10,8) then replaces the old pixels with new, clean black border pixels and tries to center the image a little better. It adds pixels in this clockwise order: 10 pixels to the left side, 6 pixels to the top, 10 pixels to the right, and 8 pixels across the bottom. Black border pixels blend in perfectly with the black backgrounds of today's displays. They disappear with TV overscan.
Crop(): http://avisynth.nl/index.php/Crop
AddBorders(): http://avisynth.nl/index.php/AddBorders
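To see that the frame size comes out unchanged, here is the bookkeeping, assuming a 720x576 PAL capture (my assumption, not stated in the script):
Code:
# Crop(16,2,-4,-12)      ->  width: 720 - 16 - 4  = 700    height: 576 - 2 - 12 = 562
# AddBorders(10,6,10,8)  ->  width: 700 + 10 + 10 = 720    height: 562 + 6 + 8  = 576
# net result: dirty edges replaced by clean black, and the picture sits 6 pixels further
# left and 4 pixels lower than in the raw capture, which re-centers it slightly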

SeparateFields().SelectEvery(4,0,3).Weave()
This is a standard method for returning progressive video to its original interlaced state. It begins by using SeparateFields() to split each full-sized progressive frame into two half-height fields taken from the same instant. Every 2 progressive frames therefore produce 4 half-height fields. For every 4 fields (field numbers start at 0), SelectEvery(4,0,3) selects field 0 and field 3, which represent two different instants in time from 2 progressive frames. The Weave() command then re-weaves those two fields into a single interlaced frame. This cuts the frame count in half and restores the frame rate to the original 25fps.
SelectEvery(): http://avisynth.nl/index.php/Select
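If the SelectEvery(4,0,3) pattern looks cryptic, here is what it does to the first few frames (illustration only, assuming TFF 50fps progressive input):
Code:
# 50p frames:             F0        F1        F2        F3  ...
# after SeparateFields:   f0 f1     f2 f3     f4 f5     f6 f7 ...   (top, bottom of each frame)
# SelectEvery(4,0,3):     f0    f3            f4    f7        ...   (two different moments in time)
# Weave():                [f0+f3]             [f4+f7]         ...   -> 25fps interlaced frames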

### --- To RGB32 for VirtualDub filters --- ###
ConvertToRGB32(interlaced=true,matrix="Rec601")
return last

ConvertToRGB32 converts the current YV12 color to RGB for the VirtualDub filters that are loaded and applied to the output of the Avisynth script while the script is running. As with all colorspace conversions, you must inform Avisynth of the current interlaced state. In this case the video has been re-interlaced (interlaced=true) and the color matrix to be used is "Rec601", which is the matrix for standard-definition digital video.

The VirtualDub filters and the settings used were saved in a VirtualDub .vcf file. The .vcf that was previously posted is Testclip1_settings.vcf. A .vcf is a text file that you download and save (the best place is the folder with your video project. Do NOT save it in your VirtualDub plugins!). To load the same filters and settings that I used for all the video output in this project, click "File..." -> "Load processing settings...". Locate your .vcf file, select it, and close the dialog. The filters that I used must be in your VDub plugins folder, or the .vcf file will not work properly.

The VirtualDub filters used were:
- ColorCamcorderDenoise v1.7 (ccd17.vdf) (http://www.digitalfaq.com/forum/atta...1&d=1544578132)
- ColorMill 2.1 (ColorMill.vdf) (http://www.digitalfaq.com/forum/atta...colormill21zip)
- the built-in VDub graphical "levels" control (built-in)
- gradation curves (gradation .vdf) (http://www.digitalfaq.com/forum/atta...1&d=1489408797)

How did I save that Avisynth script and Virtualdub output?
This was a very slow Avisynth script that processed at a measly 3 to 4 fps, notably slowed by chubbyrain2 and to a lesser extent by the re-interlace statement. Because this would be the last processing step before encoding the video to MPEG or h.264 or preparing it for the web, I saved it in VirtualDub using these steps:
- Click "Video", then click "Color depth...".
- In the "Video Color Depth" dialog, in the right-hand column under "Output format to compressor/display", select the round radio button for "4:2:0 planar YCbCr (YV12)". Why? Because if the next step is to encode to MPEG or h.264, the video will be converted to YV12 anyway. You may as well do it now, because saving as its current state of RGB will make it a much bigger file for no reason.
- Click "OK" to close that dialog window.



- Click "Video" again, then click "Compression...."


- (#1) In the "Select video compression" dialog, click "Lagarith lossless codec." You don't have Lagarith? Better get it, you'll need it. Get their Lagarith Installer (v1.3.27)" for 32 and 64 bit systems. Why? Because huffyuv can't compress YV12. Why not utvideo codec or some other? Go right ahead. Many PC media players can't read other Codecs, so get somethingn else if you want to live with that inconvenience.
- (#2) After you select Lagarith compression, Click "Configure".
- (#3) In the Lagarith setup window, in the "Mode" box, click "YV12".
- (#4) Click "OK" to close that window.

- Click "OK" again to close the compression dialog.
- In VirtualDub's top menu click "File..."
- Click "Save as Avi..."
- Give the new file a location and name.

In VirtualDub, if you don't specify an output color depth and compressor, then by default the file is saved as uncompressed RGB. This would be several times the size of a losslessly compressed file.
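To put rough numbers on that, assuming a 720x576 PAL frame saved as RGB32 (my figures, purely for illustration):
Code:
# uncompressed RGB32:  720 x 576 pixels x 4 bytes x 25 fps  ~ 41 MB/sec  ~ 150 GB per hour
# a lossless YV12 codec such as Lagarith typically lands at a small fraction of that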

Considerations for output:
I would suggest three considerations for final output after you've completed your processing and edits. The script that has been discussed generates interlaced PAL 25i. Physically, it's pure vanilla interlace. But I think you can see from the buzzy and sawtooth edges on motion that the interlacing really is not very clean. It looks noisy and annoying every time something moves. It's a common problem with home movie cameras, which were designed for more forgiving CRT TVs.

There are two ways to work around that. One way is to output 50fps double-rate progressive video. You do that by not re-interlacing your video at the end of the script. In other words, at the end of the script remove the re-interlace statement and change these lines:

Code:
Crop(16,2,-4,-12).AddBorders(10,6,10,8)
SeparateFields().SelectEvery(4,0,3).Weave()
### --- To RGB32 for VirtualDub filters --- ###
ConvertToRGB32(interlaced=true,matrix="Rec601")
return last
To this:
Code:
Crop(16,2,-4,-12).AddBorders(10,6,10,8)
### --- To RGB32 for VirtualDub filters --- ###
ConvertToRGB32(interlaced=true,matrix="Rec601")
return last
This will give you 50fps progressive output. Filter the script's output with VirtualDub and save it as YV12 in VirtualDub as shown earlier. The disadvantage? The frame rate is doubled, and you might have trouble mounting 50fps playback on some websites. The advantage? Leave it as progressive, and you have several easy and quick choices for different output modes later.

For instance if you wanted truly interlaced 25i you just open the saved file as shown below and re-interlace it as usual for encoding to DVD or SD-Bluray:

Code:
AviSource("drive\:path to\saved\50fps_Progressive"
SeparateFields().SelectEvery(4,0,3).Weave()
That would give you 25fps interlaced. That's what I did in the original script to get the testclip1_25i.mp4 attached in post #5.

The other choice would be to remove all odd-numbered frames and output a 25fps progressive video. The motion might not always be quite as smooth as interlaced or as 50fps, but you won't have the annoying interlace distortion. And you can always encode it as "fake interlace" by telling your encoder to encode as interlaced, which would be compliant for DVD or Bluray. To make your 50fps progressive video into 25fps progressive:

Code:
AviSource("drive\:path to\saved\50fps_Progressive"
SelectEven()
Return last
That's pretty much what I did to get the testclip1_25p.mp4 attached in post #5. While the 25p video is physically progressive, it's encoded with interlace flags.

If you wanted to make that 50fps progressive into 25fps progressive and resize it as square-pixel for posting to the internet, use this:
Code:
AviSource("drive\:path to\saved\50fps_Progressive"
SelectEven()
Spline36Resize(640,480)
Return last
I know it looks intimidating at first, so take your time.

You now have a quick course in valid file saving and file output/encoding options. You also have a decent startup kit of Avisynth and virtualDub plugins, and you've seen how they're used. You'll see those same filters used again and again in this and other forums. You'll see these filters and more used in another recent thread with a bad video problem: Restoring VHS MPEG-2 transcoded tape. Or try another problem thread named What are first steps to restoring captured AVI? (with samples). The scripts used for repair and restoration are explained in post #2 and post #6.

Browsing through the "Restore, Filter, Improve Quality" restoration forum area can reveal hundreds of instructive posts over the past few weeks, and they go back for months and years. Project threads are the way most people learn about video and video processing.

And post examples of the other captures you mentioned.


Attached Files
File Type: avs chubbyrain2.avs (862 Bytes, 8 downloads)

Last edited by sanlyn; 03-12-2019 at 10:38 AM.
Reply With Quote
The following users thank sanlyn for this useful post: captainvic (03-12-2019), Delta (03-29-2021), dknoll (01-04-2020), ELinder (03-12-2019), willow5 (03-15-2019)
  #18  
03-12-2019, 03:04 PM
ELinder ELinder is offline
Unconfirmed User
 
Join Date: Oct 2018
Posts: 197
Thanked 33 Times in 27 Posts
Sanlyn, reading your posts is like listening to a masters thesis presentation on video restoration. There's so much information in your posts presented in such an easy to follow fashion that it should be required reading for anyone before asking a how to question. Thank you for taking the time to write up these answers, they help many more people than just the original poster.

Erich
Reply With Quote
The following users thank ELinder for this useful post: sanlyn (03-12-2019), willow5 (03-15-2019)
  #19  
03-12-2019, 03:43 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Much appreciated. I sometimes think it's too much too fast. Hope not.
Reply With Quote
  #20  
03-15-2019, 03:11 PM
willow5 willow5 is offline
Free Member
 
Join Date: Jun 2016
Posts: 137
Thanked 0 Times in 0 Posts
Woah thanks Sanlyn. As Erich says, your guide really is a masterclass in video restoration. Please bear with me while I digest all this information and reply with a considered response but for now I just wanted to express my thanks.

One immediate question I have: can I "undo" any Avisynth scripts easily and revert to the original video without committing to the changes I am making?

Thanks again
Reply With Quote