#1  
06-01-2019, 07:14 PM
Angies_Husband Angies_Husband is offline
Free Member
 
Join Date: Jan 2019
Posts: 25
Thanked 5 Times in 4 Posts
Hi,

I recently received some DV encoded video files from a friend (sample attached). Regretfully, I do not have the original tape.

I was hoping to use Avisynth to do the initial passes at improving the quality (with plans to use Premiere Pro for NLE and color correction later). I am a total Avisynth noob though...

That being said, I am having doubts that I am even getting the file set up properly to begin with!

Four questions it would be great to get some advice on:


1. I can't seem to open the file with "AviSource(file)" or "FFmpegSource(file)". I have to use "DirectShowSource(file)". Is that a problem? If it is, I'd like to fix it before I get any further!

2. When I do "DirectShowSource(file).info" I get back the following:
### Colorspace = YUY2
### Width 720 Height 480
### FPS 29.9700 (10000000/333667)
### FieldBased (Separated) Video: NO
### Parity: Bottom Field First
### Video Pitch: 1440 bytes

Does that make sense? And if so, does it imply that I need to use "assumeBFF()" in my code? I ask because it seems that "assumeTFF()" is more commonly used in what I've seen posted here.

3. From what I have read, it seems "ConvertToYV12(interlaced=true)" is the first step in any script? I assume that applies here as well?

4. Lastly, any recommendations on favorite filters to address the issues you observe? It is a low-lit wedding, so lots of contrast (black tux vs. white dress) with not much to work with at the extremes. My primary concern would be cleaning up the noise and speckles (particularly on the dress).

Many thanks in advance!


Attached Files
File Type: avi sample1_outputOld.avi (15.54 MB, 35 downloads)
  #2  
06-02-2019, 02:12 AM
thestarswitcher thestarswitcher is offline
Free Member
 
Join Date: Dec 2017
Posts: 95
Thanked 5 Times in 5 Posts
I can't help with Avisynth but I loaded one frame into Lightroom just to see if any of the color could be salvaged, and it looks promising:

vlcsnap-2019-06-01-23h20m18s419.JPG

-- merged --

Maybe this is more realistic to the actual lighting in the room. I increased "exposure" (gamma?), chose a white point that didn't make the shadows blue, adjusted r/g/b waveforms to try to make things look natural (this also improved contrast), and reduced color noise slightly (but not luma noise, that just made everything blurry). Does this color cast look more like how you remember the event?
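For anyone curious, the two main moves can be sketched numerically. This is a toy Python model of a gamma lift and a picked white point on 0-255 values (my own illustrative formulas, nothing to do with Lightroom's actual processing):

```python
# Toy versions of the two adjustments described above.
# Real tools use more elaborate curves; this is only illustrative.

def lift_gamma(v, gamma=0.7):
    """Gamma < 1 lifts shadows/midtones without clipping black or white."""
    return 255 * (v / 255) ** gamma

def white_balance(rgb, white):
    """Scale each channel so the picked white point becomes neutral."""
    return [min(255.0, c * 255 / w) for c, w in zip(rgb, white)]

print(lift_gamma(64))                             # shadows come up, 255 stays 255
print(white_balance([180, 190, 220], [200, 210, 240]))  # blue cast reduced
```

Gamma below 1.0 brightens without fogging the extremes, and scaling each channel against the chosen white removes a cast the same way picking a white point does.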



  #3  
06-02-2019, 01:13 PM
Angies_Husband Angies_Husband is offline
Free Member
 
Join Date: Jan 2019
Posts: 25
Thanked 5 Times in 4 Posts
Thanks! Yes, it seems that bringing back more of the detail (and color) will be possible. So that is good news!

I am thinking that AviSynth will be the key to unlocking an improvement in the "grain" that results from all the noise due to the low lighting. I am looking forward to what some of the other scripting wizards on the forum come up with!
  #4  
06-02-2019, 01:19 PM
lordsmurf's Avatar
lordsmurf lordsmurf is online now
Site Staff | Video
 
Join Date: Dec 2002
Posts: 13,632
Thanked 2,458 Times in 2,090 Posts
Quote:
Originally Posted by Angies_Husband View Post
I am thinking that AviSynth will be the key to unlocking an improvement in the "grain" that results from all the noise due to the low lighting. I am looking forward to what some of the other scripting wizards on the forum come up with!
My first attempt would be to see what KNLMeansCL() in Avisynth+ x64 can do. But I've not had time to run a sample, probably won't for several more days.

- Did my advice help you? Then become a Premium Member and support this site.
- For sale in the marketplace: TBCs, workflows, capture cards, VCRs
The following users thank lordsmurf for this useful post: Angies_Husband (06-02-2019)
  #5  
06-02-2019, 03:45 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,308 Times in 982 Posts
Quote:
Originally Posted by traal View Post
Maybe this is more realistic to the actual lighting in the room. I increased "exposure" (gamma?), chose a white point that didn't make the shadows blue, adjusted r/g/b waveforms to try to make things look natural (this also improved contrast), and reduced color noise slightly (but not luma noise, that just made everything blurry). Does this color cast look more like how you remember the event?

Attachment 10177
Good color work. Looks about right.

-- merged --

@Angies_Husband, thanks for posting a sample.

To get to your 4 questions first:

Quote:
Originally Posted by Angies_Husband View Post
1. I can't seem to open the file with "AviSource(file)" or "FFmpegSource(file)". I have to use "DirectShowSource(file)". Is that a problem? If it is, I'd like to fix it before I get any further!
Don't use "DirectShowSource" unless you're absolutely desperate. You can't open DV with AviSource right now because you don't have a dvsd DV codec in your system. Rather than use Sony's or Panasonic's, install the 32-bit Cedocida DV codec. Create a new folder on your PC and name it "Cedocida". Into that new folder, download Cedocida's zipped setup package from http://www.cithraidt.de/cedocida/ced..._0.2.3_bin.zip and unzip it. There are .txt files with instructions. After installing Cedocida, go into VirtualDub, click "Video" -> "Compression...", select "Cedocida DV Codec v0.2.3" in the left-hand codec list, and set the configuration dialog as shown below:



You can play DV video in media players because most PC players have built-in DV decoders, but that doesn't mean you have a DV codec in your system setup.

Quote:
Originally Posted by Angies_Husband View Post
2. When I do "DirectShowSource(file).info" I get back the following:
### Colorspace = YUY2
### Width 720 Height 480
### FPS 29.9700 (10000000/333667)
### FieldBased (Separated) Video: NO
### Parity: Bottom Field First
### Video Pitch: 1440 bytes

Does that make sense? And if so, does it imply that I need to use "assumeBFF()" in my code? I ask because it seems that "assumeTFF()" is more commonly used in what I've seen posted here.
DirectShowSource is opening that file as YUY2, but that's not its original format. The source sample is 4:1:1 DV, not 4:2:2. To get more accurate info from video files, use the free MediaInfoXP (https://www.videohelp.com/download/M...2019-04-27.zip). No installer is required; just unzip it and run the .exe.

The default field order in Avisynth is BottomFieldFirst. Unlike most other video formats in the world, consumer DV is BFF. If you're using a TFF video in Avisynth, you must use AssumeTFF() to process with the correct TFF field order. It isn't required for BFF files, because BFF is the default.

The MediaInfoXP report (below) also shows an oddball audio sampling rate for the PCM audio. For DVD, Blu-ray, or internet posting, you'll have to make a few changes there. For now, use the original lossless PCM as-is until you get to your final encode.

Code:
General
Complete name                            : D:\forum\faq\Angies_Husband\sample1_outputOld.avi
Format                                   : AVI
Format/Info                              : Audio Video Interleave
Commercial name                          : DVCPRO
File size                                : 15.5 MiB
Duration                                 : 4s 371ms
Overall bit rate mode                    : Constant
Overall bit rate                         : 29.8 Mbps
Recorded date                            : 2003-09-21 09:00:21.000
Writing library                          : VirtualDub build 35491/release

Video
ID                                       : 0
Format                                   : DV
Commercial name                          : DVCPRO
Codec ID                                 : dvsd
Codec ID/Hint                            : Sony
Duration                                 : 4s 371ms
Bit rate mode                            : Constant
Bit rate                                 : 24.4 Mbps
Encoded bit rate                         : 28.8 Mbps
Width                                    : 720 pixels
Height                                   : 480 pixels
Display aspect ratio                     : 4:3
Frame rate mode                          : Constant
Frame rate                               : 29.970 (29970/1000) fps
Standard                                 : NTSC
Color space                              : YUV
Chroma subsampling                       : 4:1:1
Bit depth                                : 8 bits
Scan type                                : Interlaced
Scan order                               : Bottom Field First
Compression mode                         : Lossy
Bits/(Pixel*Frame)                       : 2.357
Time code of first frame                 : 00:41:45;01
Time code source                         : Subcode time code
Stream size                              : 15.0 MiB (96%)
Encoding settings                        : ae mode=full automatic / wb mode=automatic / white balance= / fcm=manual focus

Audio
ID                                       : 1
Format                                   : PCM
Format settings, Endianness              : Little
Format settings, Sign                    : Signed
Codec ID                                 : 1
Duration                                 : 4s 371ms
Bit rate mode                            : Constant
Bit rate                                 : 1 024 Kbps
Channel(s)                               : 2 channels
Sampling rate                            : 32.0 KHz
Bit depth                                : 16 bits
Stream size                              : 546 KiB (3%)
Alignment                                : Aligned on interleaves
Interleave, duration                     : 37 ms (1.11 video frame)
Interleave, preload duration             : 500 ms
Quote:
Originally Posted by Angies_Husband View Post
3. From what I have read, it seems "ConvertToYV12(interlaced=true)" is the first step in any script? I assume that applies here as well?
Not true. If you are using filters that require YV12 video, you would have to use that code. You would also have to include either "interlaced=true" or "interlaced=false", whichever applies, because the interlaced or telecined state affects the way chroma is resampled. For telecined video, treat it as interlaced. http://avisynth.nl/index.php/Convert
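To see why the flag matters, here is a toy Python model of vertical chroma downsampling (a simple row-averaging sketch of my own; real resamplers use longer kernels). Done progressively, chroma from the two fields gets mixed across two moments in time, while the per-field path keeps them separate:

```python
# Toy model of 4:2:2 -> 4:2:0 vertical chroma downsampling.
# Each number is the chroma value of one scanline.

def to_420_progressive(chroma_rows):
    # Average vertically adjacent frame rows: (0,1), (2,3), ...
    return [(chroma_rows[i] + chroma_rows[i + 1]) / 2
            for i in range(0, len(chroma_rows), 2)]

def to_420_interlaced(chroma_rows):
    # Average within each field separately, then reassemble in frame order.
    top = chroma_rows[0::2]
    bot = chroma_rows[1::2]
    top_ds = [(top[i] + top[i + 1]) / 2 for i in range(0, len(top), 2)]
    bot_ds = [(bot[i] + bot[i + 1]) / 2 for i in range(0, len(bot), 2)]
    out = []
    for t, b in zip(top_ds, bot_ds):
        out += [t, b]
    return out

# Fields captured at different instants: top field bright, bottom field dark.
rows = [100, 20, 100, 20, 100, 20, 100, 20]
print(to_420_progressive(rows))  # [60.0, 60.0, 60.0, 60.0] -- fields smeared together
print(to_420_interlaced(rows))   # [100.0, 20.0, 100.0, 20.0] -- fields kept apart
```

Using the wrong flag on interlaced material smears chroma between fields, which is exactly why the interlaced/progressive state has to be declared correctly.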

Quote:
Originally Posted by Angies_Husband View Post
4. Lastly, any recommendations on favorite filters to address the issues you observe? It is a low-lit wedding, so lots of contrast (black tux vs. white dress) with not much to work with at the extremes. My primary concern would be cleaning up the noise and speckles (particularly on the dress).
Avisynth doesn't have a plugin that can restore every color and every dark detail in that under-exposed sample. It has some "auto" filters, but they aren't at all accurate for this kind of problem and can only do so much. You can't use a "brightness" filter, because that will simply fog out the dark details, and "contrast" will totally wipe out the overhead lights and the ceiling. You would need the kind of non-linear adjustment that member traal posted in posts #2 and #3.

The speckled grain in the images isn't really grain. It's CMOS noise caused by underexposure. You'll find that most of the data in the darkest parts isn't detail at all, but mostly noise. The best Avisynth filter for that kind of dense clumpy junk would be TemporalDegrain. Note on its download wiki page at http://avisynth.nl/index.php/TemporalDegrain that it requires other plugins as support files. It also requires non-interlaced video and YV12 color. And note that like most industrial-strength plugins it's a slow filter.

Example usage:
Code:
ConvertToYV12(interlaced=true)  #<- if required
SeparateFields()
TemporalDegrain()
Weave()
Or if you want true deinterlaced video, use QTGMC for double-rate deinterlacing. Another way to use TemporalDegrain at slightly weaker (less destructive) settings is: "TemporalDegrain(SAD1=400, SAD2=200, Sigma=12)".

I couldn't use temporal filters on the Lightroom image in post #3 for a demo because such filters require multiple frames to work with. Lightroom does look promising (thanks to traal for that idea) but doing it manually using one deinterlaced image at a time would take forever and you'd have to rebuild the video and sync the sound.

But doesn't Premiere Pro have similar image controls to Lightroom? You can't do much frame repair or denoising with PP, but it should have the same advanced color tools as Lightroom. Video doesn't have to be deinterlaced to work with color, and you wouldn't want to deinterlace with Premiere Pro anyway; it isn't very good at it.

The image in post #3 has more accurate color. I didn't think that the dress or the festoons on the walls would be pure white. Note that the furniture in the room is closer to real white, while everything else is an off-white like the wedding gown. The only white dress I see is apparently worn by the woman in the left margin of the image.


Attached Images
File Type: png Cedocida setup.png (113.3 KB, 177 downloads)
The following users thank sanlyn for this useful post: Winsordawson (09-26-2019)
  #6  
06-02-2019, 04:47 PM
Angies_Husband Angies_Husband is offline
Free Member
 
Join Date: Jan 2019
Posts: 25
Thanked 5 Times in 4 Posts
Quote:
Originally Posted by sanlyn View Post
The speckled grain in the images isn't really grain. It's CMOS noise caused by underexposure. You'll find that most of the data in the darkest parts isn't detail at all, but mostly noise.
Ah yes, CMOS noise. I think "grain" is leftover terminology from my photography (film!) days, when I was shooting for a newspaper (bulk-loaded B&W film). We'd "push a few stops" if we needed higher shutter speed for the content and then attempt to make up for it with longer developing time... ah... film...

Quote:
Originally Posted by sanlyn View Post
The best Avisynth filter for that kind of dense clumpy junk would be TemporalDegrain.
Good stuff - I will try this once I get that first problem solved with the codecs...

Quote:
Originally Posted by sanlyn View Post
But doesn't Premiere Pro have similar image controls to Lightroom? You can't do much frame repair or denoising with PP, but it should have the same advanced color tools as Lightroom. Video doesn't have to be deinterlaced to work with color, and you wouldn't want to deinterlace with Premiere Pro anyway; it isn't very good at it.
Correct, Premiere Pro has very nice color correction abilities. With PP's Lumetri Scopes you can now also get things like a vectorscope (e.g., for checking skin tones).

To your exact point: as I understand it, color correction is best sequenced AFTER any attempts at frame repair, denoising, etc. (right?)

My initial thought was actually just to use some of the DigitalFAQ filters in VirtualDub to simply do the denoising. That almost feels too easy, which is why I am thinking AviSynth would provide the best end result (and be worth the extra effort to figure out).

Many thanks for the help so far! Much appreciated!

-- merged --

Ok, so making some (slow) progress, and having fun playing with some of the parameters... but running into some new problems.

From a de-noising standpoint, and evaluating static frame grabs (frame 42): nice results from TemporalDegrain, as shown below:

ORIGINAL
S1_raw.jpg

Using sanlyn's recommendation: TemporalDegrain(SAD1=400, SAD2=200, Sigma=12)


Same as sanlyn's, + HQ=2 : TemporalDegrain(SAD1=400, SAD2=200, Sigma=12, HQ=2)
Which appears to add another filter pass -> NR2.HQDn3D(0,0,4,1)
S1_Tdegrain_less.jpg

Not sure if the last one is too soft...

I also played around with box size (bw, bh) and also lower values of (sigma)... nominal differences (mostly a mind game of "well, they are both better, but one is a bit different, I can't tell which is more correct...")

However, the static frame view is one thing... I think my current issue is that when the video is played back, the TemporalDegrain filter seems to create a lot of "shimmer" (I don't know what to call it).

It is most noticeable in the detail of the drapery folds surrounding the windows in the background. The effect gets worse (somewhat) with HQ=2.

I am curious if there is a recommended approach to addressing this? Is this due to the way the filter picks up camera shake?

A few thoughts/questions:
1. Should I attempt to add some sort of stabilization/deshake prior to TemporalDegrain? (I don't know how that would work, but it would be fun to learn.)
2. Should I ditch TemporalDegrain in favor of QTGMC? (That filter seems much more intimidating to understand vs. TemporalDegrain and the FFT approach.) QTGMC seems to draw on the KNLMeansCL() filter mentioned by lordsmurf?
3. Should I attempt to use TemporalDegrain2 (in AviSynth+)?

Thanks!


  #7  
06-05-2019, 12:16 AM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,308 Times in 982 Posts
I tried TemporalDegrain2 -- it softened details, and some details disappeared altogether.
There's no perfect filter for CMOS noise. At least, I haven't found one.

Instead of making the video non-interlaced with SeparateFields(), you can use QTGMC at preset="medium", which handles some of the shimmer. Be sure to specify AssumeTFF() before using it.
Code:
AssumeTFF()
QTGMC(preset="medium")
QTGMC will double-rate deinterlace, so to restore the interlaced state at some point you can use:
Code:
SeparateFields().SelectEvery(4,0,3).Weave()
QTGMC isn't very effective with CMOS noise. It's a different kind of plugin, mostly concerned with undoing or avoiding the shimmer and loss of precision when creating full-sized frames from half-height fields, but a side benefit is some level of general noise reduction. Slower presets = more denoising (sometimes too much) and slower runtime. Note that QTGMC and TemporalDegrain and similar combinations won't win any speed contests. Both filters use a lot of memory swapping, and in combination they bump into each other doing it. Some people just run them in separate scripts. Pain in the neck.

KNLMeansCL won't do very much against CMOS noise; it's for finer-grained gaussian noise (really tiny dots). You need a GPU with OpenCL support to use it. KNLMeans() is the original version, which works with any graphics card. It doesn't hit the heavier, clumpy stuff as well as the original TemporalDegrain. You haven't seen slow until you see KNLMeans.

You can use the Stab() plugin to calm camera shake a little, but note that you'll have to adjust frame borders later with Crop() and then AddBorders() to restore the original frame size after cropping the black borders. If you do run Stab(), run it in a separate script by itself and save the output as YV12 using Lagarith's codec. Then use a second script to study the output file, clean up the borders, and run your other cleanup routines.

I'd advise not to resize the core image just to fill in the frame. Resizing always has a cost and tends to undo the distortion cleanup of many filters. It doesn't do much good anyway; borders are usually invisible against the black backgrounds of wide-screen monitors, and TV overscan will mask anything in the usual border area, including parts of the image (yes, HDTV uses overscan by default; on many cheaper sets it can't be defeated). As for purists: they often mess around with borders, but not with the core image.
The following users thank sanlyn for this useful post: captainvic (06-12-2019)
  #8  
06-11-2019, 10:45 PM
Angies_Husband Angies_Husband is offline
Free Member
 
Join Date: Jan 2019
Posts: 25
Thanked 5 Times in 4 Posts
Finally got around to playing with this some more over the weekend. Thanks again for the feedback so far, and traal's color correction work sets a high bar indeed!

On the Avisynth side, I am having some trouble with my workflow...
QTGMC
TemporalDegrain
AddGrainC
ConvertToRGB32

works fine up to this point, and can be brought into Premiere Pro.

If I try to add a VDub filter:
CCD(10,1)

The file will not import into Premiere Pro... So I am baffled by this, but I am wondering if I have some missing conversions needed for VDub?

Code:
Import("C:\path\TemporalDegrain.avs")
LoadVirtualDubPlugin("C:\path\ccd_sse2.vdf", "CCD", 1)

###-------FIRST---------
AviSource("F:\path\sample1_outputOld.avi")
### note source is DV file YV12 4:1:1

QTGMC(preset="slower", EZDenoise=1, NoisePreset="slow", DenoiseMC=True, sigma=3)
a=last

###-------SECOND--------

#ConvertToYV12(interlaced=true)  #<- if required
###Note already deinterlaced from QTGMC
#SeparateFields()  
a.TemporalDegrain(SAD1=400, SAD2=200, Sigma=3)
#Weave()

###add fine grain noise to luma only
AddGrainC(0.75,0)

###To return to interlaced
#SeparateFields().SelectEvery(4,0,3).Weave()

###Convert to RGB32
ConvertToRGB32(interlaced=FALSE,matrix="Rec601")

###------THIRD-----------

###Vdub filter previously loaded
CCD(10,1) # from 0 to 100
So with the code above, I can run the FIRST and SECOND segments, but adding the THIRD produces files that don't open in Premiere Pro. The AVI files will open back up in VDub though... I've also tried adding the filter in VDub (not in the Avisynth code) and I get the same result.

I am currently doing this with uncompressed AVIs (and once I get that working, will attempt to get Lagarith to work with Premiere... another challenge...)

Thoughts on the workflow and the Vdub filter issue? Many thanks in advance!

Last edited by Angies_Husband; 06-11-2019 at 10:47 PM. Reason: typo in code
  #9  
06-12-2019, 12:24 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,308 Times in 982 Posts
I'm not sure what "result" you refer to in Premiere, since I've never had much use for it. It usually has a problem with huffyuv rather than Lagarith. If you have 64-bit PP but are using 32-bit Avisynth with 32-bit codecs, that might be the problem. PP experts might be able to help with that. I've used After Effects for years with huffyuv and Lagarith, no problem.

Meanwhile, your Avisynth script has logic glitches. First, the script uses two video clips, not one. The first video clip is created when you open a file with AviSource. The second clip is created in memory and is named "a". The only thing that happens to "a" is that it gets a dose of TemporalDegrain. All the other processing gets applied to the clip opened by AviSource. At the end of the script, Avisynth doesn't know which video clip to return. Unless you have some reason for creating "a" and then not doing anything with it, I'd just discard it.
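As a rough analogy (Python, not AviSynth syntax), treating each clip as a list of the filters applied to it shows where the chain described above loses the TemporalDegrain pass:

```python
# Rough Python analogy of the script's dataflow problem (not AviSynth).
# A "clip" is modeled as the list of filters that have touched it.

def qtgmc(clip):            return clip + ["QTGMC"]
def temporal_degrain(clip): return clip + ["TemporalDegrain"]
def add_grain(clip):        return clip + ["AddGrainC"]

last = ["AviSource"]      # AviSource(...) creates the first clip
last = qtgmc(last)        # QTGMC(...) filters it
a = last                  # a = last  -> a second, named clip
temporal_degrain(a)       # a.TemporalDegrain(...): result never kept
last = add_grain(last)    # later filters chain off the QTGMC clip instead

print(last)  # ['AviSource', 'QTGMC', 'AddGrainC'] -- no TemporalDegrain
```

Keeping every filter on one chain (or explicitly reassigning the named clip) avoids the ambiguity about which clip gets returned.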

Code:
Import("C:\path\TemporalDegrain.avs")
LoadVirtualDubPlugin("C:\path\ccd_sse2.vdf", "CCD", 1)

###-------FIRST---------
AviSource("F:\path\sample1_outputOld.avi")
### note source is DV file YV12 4:1:1

QTGMC(preset="slower", EZDenoise=1, NoisePreset="slow", DenoiseMC=True, sigma=3)

###-------SECOND--------
TemporalDegrain(SAD1=400, SAD2=200, Sigma=3)

###add fine grain noise to luma only
AddGrainC(0.75,0)

###To return to interlaced
SeparateFields().SelectEvery(4,0,3).Weave()

###Convert to RGB32
ConvertToRGB32(interlaced=FALSE,matrix="Rec601")

###------THIRD-----------
###Vdub filter previously loaded
CCD(10,1) # from 0 to 100
Or if for some reason you really need "a", place execution onto it by explicitly naming it in code. Then, because there are really two clips involved, tell Avisynth what to return:

Code:
Import("C:\path\TemporalDegrain.avs")
LoadVirtualDubPlugin("C:\path\ccd_sse2.vdf", "CCD", 1)

###-------FIRST---------
AviSource("F:\path\sample1_outputOld.avi")
### note source is DV file YV12 4:1:1

QTGMC(preset="slower", EZDenoise=1, NoisePreset="slow", DenoiseMC=True, sigma=3)
a = last    #<-- create clip a
a           #<-- focus operations on clip a
            
###-------SECOND--------
TemporalDegrain(SAD1=400, SAD2=200, Sigma=3)

###add fine grain noise to luma only
AddGrainC(0.75,0)

###To return to interlaced
SeparateFields().SelectEvery(4,0,3).Weave()

###Convert to RGB32
ConvertToRGB32(interlaced=FALSE,matrix="Rec601")

###------THIRD-----------
###Vdub filter previously loaded
CCD(10,1) # from 0 to 100

Return last   #<-- return the last thing you did, all of which was applied to clip a
The following users thank sanlyn for this useful post: Angies_Husband (06-12-2019), captainvic (06-12-2019)
  #10  
06-12-2019, 03:19 PM
Angies_Husband Angies_Husband is offline
Free Member
 
Join Date: Jan 2019
Posts: 25
Thanked 5 Times in 4 Posts
Thanks sanlyn, very helpful clarifications!

To your point about the clip "a": I was not sure if the QTGMC output needed to be saved as a separate clip to then propagate through the rest of the flow. Clearly that is not the case, and your revised code is much cleaner.

On the logic error, I assumed that Avisynth would continue to chain the resulting clips in serial...
So I thought:
Code:
a.Function()
NextFunction()
was the same as:
Code:
a
Function()
NextFunction()
Thanks for clarifying the logic error!

Quick question on the re-interlacing...

When you do:
Code:
SeparateFields().SelectEvery(4,0,3).Weave()
The result is back to the original frame rate of the source, with alternating top and bottom fields properly sequenced?

Is this because the current clip is twice the frame rate, but each frame has either a top or bottom, and then calling
"SeparateFields()" creates frames that go: Frame1-top, Frame1-bottom(blank), Frame2-top(blank), Frame2-bottom,etc...
The "SelectEvery()" pulls out the frames with actual info
and then "weave()" joins these top and bottom frames into an interlaced clip at the original frame rate?
Is that understanding correct?

If so - I assume I need to fix this last line with the RGB conversion
Code:
ConvertToRGB32(interlaced=FALSE,matrix="Rec601")
And should read "interlaced=TRUE" ?
  #11  
06-12-2019, 04:53 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,308 Times in 982 Posts
Quote:
Originally Posted by Angies_Husband View Post
When you do:
Code:
SeparateFields().SelectEvery(4,0,3).Weave()
The result is back to the original frame rate of the source, with alternating top and bottom fields properly sequenced?
Correct.

Quote:
Originally Posted by Angies_Husband View Post
Is this because the current clip is twice the frame rate, but each frame has either a top or bottom, and then calling
"SeparateFields()" creates frames that go: Frame1-top, Frame1-bottom(blank), Frame2-top(blank), Frame2-bottom,etc...
The "SelectEvery()" pulls out the frames with actual info
and then "weave()" joins these top and bottom frames into an interlaced clip at the original frame rate?
Is that understanding correct?
Not quite.

Progressive frames contain only one image. They don't really have "fields". However, SeparateFields() when applied to a progressive frame treats the frame as if two fields did exist. For each frame it creates 2 brand new half-height fields, each 240 scanlines in height (PAL would be 288 lines each). The first half-height field contains the even scanlines from the original frame (top "field"), the second half-height field contains the original odd-numbered scanlines (bottom "field"). Because the original frame contains only one continuous image, the two new fields are copies of each other.

SelectEvery(4,0,3) takes the first 4 of these new fields. With fields numbering from 0 to 3, those 4 fields are (#0) the top field from frame 1, (#1) the bottom field from frame 1, (#2) the top field from frame 2, and (#3) the bottom field from frame 2. SelectEvery() then selects the top field from frame 1 and the bottom field from frame 2. Then Weave() arranges them as two fields properly interlaced inside one interlaced frame.
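The description above can be sketched as a toy Python model of the scanline bookkeeping (my own illustration, not Avisynth internals). Two bobbed frames built from one original interlaced frame go in; one properly interlaced frame comes back out:

```python
# Toy scanline model of SeparateFields() -> SelectEvery(4,0,3) -> Weave().
# A frame is a list of scanlines; a field is a half-height list.

def separate_fields(frames, tff=True):
    # Split each frame into two half-height fields (even rows, odd rows).
    fields = []
    for f in frames:
        top, bottom = f[0::2], f[1::2]
        fields += [top, bottom] if tff else [bottom, top]
    return fields

def select_every(clips, n, *offsets):
    # From every group of n clips, keep the ones at the given offsets.
    return [clips[i + o] for i in range(0, len(clips) - n + 1, n)
                         for o in offsets]

def weave(fields, tff=True):
    # Interleave each pair of fields back into one full-height frame.
    frames = []
    for i in range(0, len(fields), 2):
        a, b = fields[i], fields[i + 1]
        top, bottom = (a, b) if tff else (b, a)
        frame = []
        for t, bo in zip(top, bottom):
            frame += [t, bo]
        frames.append(frame)
    return frames

# Two bobbed progressive frames from one interlaced original:
# frame 1 built from top field "T", frame 2 from bottom field "B".
bob = [["T0", "T0", "T1", "T1"], ["B0", "B0", "B1", "B1"]]

fields = separate_fields(bob)           # 4 half-height fields
picked = select_every(fields, 4, 0, 3)  # top of frame 1, bottom of frame 2
print(weave(picked))                    # [['T0', 'B0', 'T1', 'B1']]
```

The woven frame alternates the original top and bottom scanlines, which is exactly the re-interlaced result described above.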

This is another reason why it's important to use AssumeTFF() in a script that plays with fields and interlacing. The Avisynth default assumption is BFF (Bottom Field First). If you use that assumption, think about what happens when SeparateFields() starts this process of creating new fields from the scanlines in an original TFF progressive frame: the scanlines and the new fields are handled as BFF, and the newly created interlaced frames contain scanlines that are in the reverse order of the original TFF file.

Also consider why most pros refer to software deinterlacing as a destructive process. They insist that deinterlacing is something done only when necessary. The process is destructive for several reasons, not the least of which is that when a progressive frame is separated into two fields, the fields are pretty close copies of each other but are not exact in every respect: two sets of scanlines are retained, two sets are discarded. Add to that the numeric rounding and resizing errors that occur when an interlaced frame is split into two fields and two new, completely resized and interpolated frames are created from two half-height fields.

Fortunately, QTGMC takes this into account when doing its job, and even cleans up a lot of stuff along the way, but QTGMC can't solve everything in the process. Even when you reinterlace for the sake of the output media's requirements, you are reinterlacing errors and omissions. Ultimately the question is: do you want to keep the original dirt, noise, defects and other junk, or would you prefer some cleanup that at least improves what you started with? Also keep in mind that many filters can be used between SeparateFields() and Weave(), which doesn't require full deinterlacing.

Quote:
Originally Posted by Angies_Husband View Post
If so - I assume I need to fix this last line with the RGB conversion
Code:
ConvertToRGB32(interlaced=FALSE,matrix="Rec601")
And should read "interlaced=TRUE" ?
Yep, you are correct, and this time you caught me in an error. I didn't remember that the re-interlace statement had been activated again. So "interlaced=true" would be correct. Chalk up a boo-boo on my side of the board, folks. Good work.
The following users thank sanlyn for this useful post: Angies_Husband (06-12-2019)
  #12  
06-12-2019, 06:21 PM
Angies_Husband Angies_Husband is offline
Free Member
 
Join Date: Jan 2019
Posts: 25
Thanked 5 Times in 4 Posts
Ah, starting to make more sense now - Thanks! So much more refreshing to actually understand what is going on under the hood.

So, since earlier in the script I run QTGMC on a 30fps interlaced source...
The output of that is a 60fps progressive clip, where each new frame is a full-height frame created by "doubling" the even or the odd scanline field (i.e., a bob deinterlace)?

For example, Given:
F1 = source frame 1 = field(F1even) + field(F1odd)
F1' = QTGMC output frame 1
etc..

The frames out of QTGMC would be:
F1'= field(F1even)+field(F1even)
F2'= field(F1odd)+field(F1odd)
F3'= field(F2even)+field(F2even)
F4'= field(F2odd)+field(F2odd)

Is that right?

Quote:
Originally Posted by sanlyn View Post
Progressive frames contain only one image. They don't really have "fields". However, SeparateFields() when applied to a progressive frame treats the frame as if two fields did exist. For each frame it creates 2 brand new half-height fields, each 240 scanlines in height (PAL would be 288 lines each). The first half-height field contains the even scanlines from the original frame (top "field"), the second half-height field contains the original odd-numbered scanlines (bottom "field"). Because the original frame contains only one continuous image, the two new fields are copies of each other.
Ah ok, so now we have that progressive 60fps output from QTGMC fed into SeparateFields(), and as you describe above, this would create the even and odd half-height field frames (it's half height because there is no doubling)

Given:
F1'' = Frame 1 from SeparateFields()
Then:
F1'' = F1'(even)
F2'' = F1'(odd)
F3'' = F2'(even)
F4'' = F2'(odd)

Quote:
Originally Posted by sanlyn View Post
SelectEvery(4,0,3) takes the first 4 of these new fields. With fields numbering from 0 to 3, those 4 fields are (#0) the top field from frame 1, (#1) the bottom field from frame 1, (#2) the top field from frame 2, and (#3) the bottom field from frame 2. SelectEvery() then selects the top field from frame 1 and the bottom field from frame 2. Then Weave() arranges them as two fields properly interlaced inside one interlaced frame.
Now the SelectEvery(4,0,3) takes those four new frames:
F1'', F2'', F3'', F4''
and takes 0 and 3 = F1'' and F4''

lastly Weave() interlaces F1'' and F4''

Substituting from before
F1'' = F1'(even) = field(F1even)
F4'' = F2'(odd) = field(F1odd)

so we are re-interlacing the original even+odd fields from the original interlaced Frame 1

Phew! Complicated!
Is that right?
  #13  
06-12-2019, 07:24 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,308 Times in 982 Posts
You can get into overthinking it.
By the way, if you're working with consumer DV-AVI, the field priority is BFF.

Classic html page: Neuron2_How To Analyze Video Frame Structure