Working with DV source in Avisynth?
Hi,
I recently received some DV-encoded video files from a friend (sample attached). Regretfully, I do not have the original tape. I was hoping to use Avisynth to do the initial passes at improving the quality (with plans to use Premiere Pro for NLE and color correction later). I am a total Avisynth noob, though... That being said, I am having doubts that I am even getting the file set up properly to begin with! Four questions it would be great to get some advice on:

1. I can't seem to open the file with "AviSource(file)" or "FFmpegSource(file)". I have to use "DirectShowSource(file)". Is that a problem? If it is, I'd like to fix it before I get any further! :laugh:

2. When I do "DirectShowSource(file).Info()" I get back the following:

Colorspace = YUY2
Width 720 Height 480
FPS 29.9700 (10000000/333667)
FieldBased (Separated) Video: NO
Parity: Bottom Field First
Video Pitch: 1440 bytes

Does that make sense? And if so, does it imply that I need to use "AssumeBFF()" in my code? I ask because "AssumeTFF()" seems more commonly used in what I've seen posted here.

3. From what I have read, it seems "ConvertToYV12(interlaced=true)" is the first step in any script? I assume that applies here as well?

4. Lastly, any recommendations on favorite filters to address the issues you observe? It is a low-lit wedding, so lots of contrast (black tux vs. white dress) with not a lot to work with on the extremes, it seems. My primary concern would be cleaning up the noise and speckles (particularly on the dress, for example).

Many thanks in advance! |
I can't help with Avisynth but I loaded one frame into Lightroom just to see if any of the color could be salvaged, and it looks promising:
Attachment 10176

-- merged --

Maybe this is more realistic to the actual lighting in the room. I increased "exposure" (gamma?), chose a white point that didn't make the shadows blue, adjusted the R/G/B waveforms to try to make things look natural (this also improved contrast), and reduced color noise slightly (but not luma noise; that just made everything blurry). Does this color cast look more like how you remember the event?

Attachment 10177 |
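For anyone who wants to rough out the same moves in Avisynth rather than Lightroom, here is a hypothetical starting point. The function calls are standard Avisynth, but every number is a placeholder to tune by eye, not the settings used above:

Code:
ConvertToRGB32(interlaced=true, matrix="Rec601")
Levels(16, 1.3, 235, 16, 235, coring=false)   # lift gamma -- the "exposure" move
RGBAdjust(r=1.0, g=1.0, b=0.92)               # pull the blue cast out of the shadows/white point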
Thanks! Yes, it seems that bringing back more of the detail (and color) will be possible. So that is good news!
I am thinking that AviSynth will be the key to unlocking an improvement in the "grain" that results from all the noise due to the low lighting. I am looking forward to what some of the other scripting wizards on the forum come up with! |
@Angies_Husband, thanks for posting a sample. To get to your 4 questions first:

On question 1: You can play DV video in media players because most PC players have built-in DV decoders, but that doesn't mean you have a DV codec in your system setup. AviSource() needs a Video for Windows DV decoder installed (Cedocida is the usual free choice) before it can serve DV-AVI files: http://www.digitalfaq.com/forum/atta...1&d=1559508458

On question 2: The default field order assumption in Avisynth is Bottom Field First. Unlike most other video formats in the world, consumer DV is BFF. If you're using a TFF video in Avisynth, you must use AssumeTFF() to process it in the correct TFF field order. Otherwise nothing is required for BFF files, because BFF is the default.

The MediaInfoXP report for your sample also shows an oddball audio sampling rate for the PCM audio. For DVD or Blu-ray, or for internet posting, you'll have to make a few changes there. For now, use the original lossless PCM as-is until you get to your final encode.
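To make the field-order point concrete, a minimal sketch (file name hypothetical):

Code:
AviSource("C:\path\video.avi")  # consumer DV-AVI: bottom field first
# AssumeBFF()  # redundant -- BFF is already the Avisynth default assumption
# AssumeTFF()  # only needed for sources that really are top field first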
On questions 3 and 4: The speckled grain in the images isn't really grain. It's CMOS noise caused by underexposure. You'll find that most of the data in the darkest parts isn't detail at all, but mostly noise. The best Avisynth filter for that kind of dense, clumpy junk would be TemporalDegrain. Note on its download wiki page at http://avisynth.nl/index.php/TemporalDegrain that it requires other plugins as support files. It also requires non-interlaced video and YV12 color -- which answers question 3: convert to YV12 early, but only if your source isn't YV12 already. And note that, like most industrial-strength plugins, it's a slow filter. Example usage: Code:
ConvertToYV12(interlaced=true) #<- if required
SeparateFields()
TemporalDegrain(SAD1=400, SAD2=200, Sigma=12)
Weave()

I couldn't use temporal filters on the Lightroom image in post #3 for a demo, because such filters require multiple frames to work with. Lightroom does look promising (thanks to traal for that idea), but doing it manually, one deinterlaced image at a time, would take forever, and you'd have to rebuild the video and re-sync the sound.

But doesn't Premiere Pro have similar image controls to Lightroom? You can't do much frame repair or denoising with PP, but it should have the same advanced color tools as Lightroom. Video doesn't have to be deinterlaced to work with color, and you wouldn't want to deinterlace with Premiere Pro anyway; it isn't very good at it.

The image in post #3 has more accurate color. I didn't think that the dress or the festoons on the walls would be pure white. Note that the furniture in the room is closer to real white, while everything else is an off-white like the wedding gown. The only white dress I see is apparently worn by the woman in the left margin of the image. |
To your exact point - as I understand it, color correction is best sequenced in the workflow AFTER any attempts at frame repair, denoising, etc. (right?) My initial thought was actually just to use some of the DigitalFAQ filters in VirtualDub to simply do the denoising. That almost feels too easy, which is why I am thinking Avisynth would provide the best end result (and be worth the extra effort to figure out). :) Many thanks for the help so far! Much appreciated!

-- merged --

Ok, so making some (slow) progress, and having fun playing with some of the parameters... but running into some new problems. From a denoising standpoint, evaluating static frame grabs (frame 42), I got nice results from TemporalDegrain, as shown below:

ORIGINAL Attachment 10183

Using sanlyn's recommendation: TemporalDegrain(SAD1=400, SAD2=200, Sigma=12) Attachment 10184

Same as sanlyn's, + HQ=2: TemporalDegrain(SAD1=400, SAD2=200, Sigma=12, HQ=2), which appears to add another filter pass -> NR2.HQDn3D(0,0,4,1) Attachment 10185

Not sure if the last one is too soft... I also played around with box size (bw, bh) and with lower values of sigma... nominal differences (mostly a mind game of "well, they are both better, but each is a bit different, and I can't tell which is more correct..."). :laugh:

However, the static frame view is one thing... my current issue is that when the video is played back, the TemporalDegrain filter seems to create a lot of "shimmer" (I don't know what else to call it). It is most noticeable in the detail of the drapery folds surrounding the windows in the background. The effect gets somewhat worse with HQ=2. I am curious if there is a recommended approach to addressing this? Is it due to the way the filter picks up camera shake? A few thoughts/questions:

1. Should I attempt to add some sort of stabilization/deshake prior to the TemporalDegrain? (I don't know how that would work, but it would be fun to learn)
2. Should I ditch TemporalDegrain in favor of QTGMC? (that filter seems much more intimidating to understand vs. TemporalDegrain and the FFT approach)... QTGMC seems to draw on the KNLMeansCL() filter mentioned by LordSmurf.
3. Should I attempt to use TemporalDegrain2 (in AviSynth+)?

Thanks! |
I tried TemporalDegrain2 -- it softened details, and some details disappeared altogether.
There's no perfect filter for CMOS noise; anyway, I haven't found one. Instead of making the video non-interlaced with SeparateFields(), you can use QTGMC at preset="medium", which handles some of the shimmer. Be sure to specify AssumeTFF() before using it. Code:
AssumeTFF()
QTGMC(preset="medium")
TemporalDegrain(SAD1=400, SAD2=200, Sigma=12)

Then re-interlace afterward for your final output: Code:
SeparateFields().SelectEvery(4,0,3).Weave()

KNLMeansCL won't do very much against CMOS noise; it's for finer-grained gaussian noise (really tiny dots). You need a GPU with OpenCL support to use it. KNLMeans() is the original version, which works with any graphics card. It doesn't hit the heavier, clumpy stuff as well as the original TemporalDegrain. And you haven't seen slow until you see KNLMeans.

You can use the Stab() plugin to calm camera shake a little, but note that you'll have to adjust frame borders later with Crop(), and then AddBorders() to restore the original frame size after cropping the black borders. If you do run Stab(), run it in a separate script by itself and save the output as YV12 using the Lagarith codec. Then use a second script to study the output file, clean up the borders, and run your other cleanup routines.

I'd advise not resizing the core image just to fill in the frame. Resizing always has a cost and tends to undo the distortion cleanup of many filters. It doesn't do much good anyway; borders are usually invisible against the black backgrounds of wide-screen monitors, and TV overscan will mask anything in the usual border area, including parts of the image (yes, HDTV uses overscan by default; on many cheaper sets it can't be defeated). As for purists: they often mess around with borders, but not with the core image. |
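To make that two-script Stab() workflow concrete, a minimal sketch -- file names and crop amounts are hypothetical, to be tuned after inspecting the borders:

Code:
# Script 1: stabilize only, then save the output as YV12 with the Lagarith codec
AviSource("C:\path\video.avi")
AssumeTFF()
QTGMC(preset="medium")   # stabilize after deinterlacing
Stab()

# Script 2 (separate file): inspect the saved result, then repair the borders
AviSource("C:\path\stabilized.avi")
Crop(8, 8, -8, -8)       # trim the edge wobble Stab leaves; amounts are guesses
AddBorders(8, 8, 8, 8)   # restore the original 720x480 frame size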
Finally got around to playing with this some more over the weekend. Thanks again for the feedback so far, and traal's color correction work sets a high bar indeed! :)
On the Avisynth side, I am having some trouble with my workflow...

QTGMC
TemporalDegrain
AddGrainC
ConvertToRGB32

works fine up to this point, and the result can be brought into Premiere Pro. If I try to add a VDub filter:

CCD(10,1)

the file will not import into Premiere Pro... I am baffled by this, but I am wondering if I am missing some conversions needed for VDub? Code:
Import("C:\path\TemporalDegrain.avs")
AviSource("C:\path\video.avi")
AssumeTFF()
QTGMC(preset="medium")
a = TemporalDegrain(SAD1=400, SAD2=200, Sigma=12)
AddGrainC()
SeparateFields().SelectEvery(4,0,3).Weave()
ConvertToRGB32(interlaced=false, matrix="Rec601")

I am currently doing this with uncompressed AVIs (and once I get that working, I will attempt to get Lagarith to work with Premiere... another challenge...). Thoughts on the workflow and the VDub filter issue? Many thanks in advance! |
I'm not sure what "result" you refer to in Premiere, since I've never had much use for it. It usually has a problem with huffyuv rather than Lagarith. If you have 64-bit PP but are using 32-bit Avisynth with 32-bit codecs, that might be the problem. PP experts might be able to help with that. I've used After Effects for years with huffyuv and Lagarith, no problem.
Meanwhile, your Avisynth script has logic glitches. First, the script uses two video clips, not one. The first video clip is created when you open a file with AviSource. The second clip is created in memory and is named "a". The only thing that happens to "a" is that it gets a dose of TemporalDegrain. All the other processing gets applied to the clip opened by AviSource. At the end of the script, Avisynth doesn't know which video clip you intend to return. Unless you have some reason for creating "a" and then not doing anything with it, I'd just discard it: Code:
Import("C:\path\TemporalDegrain.avs")
AviSource("C:\path\video.avi")
AssumeTFF()
QTGMC(preset="medium")
TemporalDegrain(SAD1=400, SAD2=200, Sigma=12)
AddGrainC()
SeparateFields().SelectEvery(4,0,3).Weave()
ConvertToRGB32(interlaced=false, matrix="Rec601")

Or, if you prefer to keep a named clip, assign every step to it and return it explicitly: Code:
Import("C:\path\TemporalDegrain.avs")
a = AviSource("C:\path\video.avi")
a = a.AssumeTFF()
a = a.QTGMC(preset="medium")
a = a.TemporalDegrain(SAD1=400, SAD2=200, Sigma=12)
a = a.AddGrainC()
a = a.SeparateFields().SelectEvery(4,0,3).Weave()
a = a.ConvertToRGB32(interlaced=false, matrix="Rec601")
return a |
Thanks sanlyn, very helpful clarifications!
To your point about the clip "a" that gets created: I was not sure if the QTGMC output needed to be saved as a separate clip in order to propagate through the rest of the flow. Clearly that is not the case, and your revised code is much cleaner. On the logic error, I had assumed that Avisynth would continue to chain the resulting clips in serial... So I thought: Code:
a.Function()

would keep applying each later filter to "a". I see now that the later lines actually operate on the unnamed "last" clip, and that returning the named clip would mean ending the script with: Code:
a

Quick question on the re-interlacing... When you do: Code:
SeparateFields().SelectEvery(4,0,3).Weave()

Is this because the current clip is twice the frame rate, but each frame holds either a top or a bottom field, so calling SeparateFields() creates frames that go: Frame1-top, Frame1-bottom(blank), Frame2-top(blank), Frame2-bottom, etc.? Then SelectEvery() pulls out the frames with actual info, and Weave() joins these top and bottom frames into an interlaced clip at the original frame rate? Is that understanding correct? If so, I assume I need to fix this last line with the RGB conversion: Code:
ConvertToRGB32(interlaced=FALSE,matrix="Rec601") |
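For reference: once a clip has been re-interlaced, the RGB conversion does need to treat it as interlaced. A minimal sketch of the corrected tail of the script, assuming the re-interlace lines come right before it:

Code:
SeparateFields().SelectEvery(4,0,3).Weave()       # back to 29.97fps interlaced
ConvertToRGB32(interlaced=true, matrix="Rec601")  # interlaced=true after re-interlacing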
Progressive frames contain only one image. They don't really have "fields". However, SeparateFields(), when applied to a progressive frame, treats the frame as if two fields did exist. For each frame it creates 2 brand-new half-height fields, each 240 scanlines in height (PAL would be 288 lines each). The first half-height field contains the even-numbered scanlines from the original frame (the top "field"); the second contains the original odd-numbered scanlines (the bottom "field"). Because the original frame contains only one continuous image, the two new fields are copies of each other.

SelectEvery(4,0,3) works on groups of 4 of these new fields. With the fields numbered 0 to 3, those 4 fields are (#0) the top field from frame 1, (#1) the bottom field from frame 1, (#2) the top field from frame 2, and (#3) the bottom field from frame 2. SelectEvery() then selects the top field from frame 1 and the bottom field from frame 2. Then Weave() arranges them as two fields properly interlaced inside one interlaced frame.

This is another reason why it's important to use AssumeTFF() in a script that plays with fields and interlacing. The Avisynth default assumption is BFF (Bottom Field First). If you use that assumption, then think about what happens when SeparateFields() starts this process of creating new fields from the scanlines in an original TFF progressive frame. If the assumption is BFF, the scanlines and the new fields are handled that way, and the newly created interlaced frames contain scanlines in the reverse order of the original TFF file.

Also consider why most pros refer to software deinterlacing as a destructive process. They insist that deinterlacing is something done only when necessary. The process is destructive for several reasons, not the least of which is that when a progressive frame is separated into two fields, the fields are pretty close copies of each other but are not exact in every respect. Two sets of scanlines are retained, two sets are discarded. Add to that the numeric rounding and resizing errors that occur when an interlaced frame is split into two fields and two new, completely resized and interpolated frames are created from two half-height fields. Fortunately QTGMC takes this stuff into account when doing its job, and even cleans up a lot of junk along the way, but QTGMC can't solve everything in the process. Even when you reinterlace for the sake of the output medium's requirements, you are reinterlacing errors and omissions.

Ultimately the answer comes down to: do you want to keep the original dirt, noise, defects and other junk, or would you prefer some cleanup that at least improves what you started with? Also keep in mind that many filters can be used with SeparateFields() and Weave(), which doesn't require full deinterlacing.
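Putting the whole sequence together, a minimal sketch of the bob-then-reinterlace round trip described above:

Code:
AssumeTFF()
QTGMC(preset="medium")   # 29.97fps interlaced in -> 59.94fps progressive out
SeparateFields()         # 119.88 half-height "fields" per second
SelectEvery(4, 0, 3)     # keep top field of frame N, bottom field of frame N+1
Weave()                  # 29.97fps interlaced again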
|
Ah, starting to make more sense now - Thanks! So much more refreshing to actually understand what is going on under the hood.
So, since earlier in the script I run QTGMC on a 30fps interlaced source... the output of that is a 60fps progressive source, where each new frame is a full-height frame created by "doubling" the even or "doubling" the odd scanline field (i.e., bob deinterlace)? For example, given:

F1 = source frame 1 = field(F1even) + field(F1odd)
F1' = QTGMC output frame 1
etc...

The frames out of QTGMC would be:

F1' = field(F1even) + field(F1even)
F2' = field(F1odd) + field(F1odd)
F3' = field(F2even) + field(F2even)
F4' = field(F2odd) + field(F2odd)

Is that right?
Given:

F1'' = Frame 1 from SeparateFields()

Then:

F1'' = F1'(even)
F2'' = F1'(odd)
F3'' = F2'(even)
F4'' = F2'(odd)
SelectEvery(4,0,3) looks at F1'', F2'', F3'', F4'' and takes 0 and 3 = F1'' and F4''. Lastly, Weave() interlaces F1'' and F4''. Substituting from before:

F1'' = F1'(even) = field(F1even)
F4'' = F2'(odd) = field(F1odd)

So we are re-interlacing the original even+odd fields from the original interlaced frame 1. Phew! Complicated! :laugh: Is that right? |
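One way to sanity-check this understanding, assuming the original file is at hand (file name hypothetical): subtract the round-tripped clip from the source and see what actually changed.

Code:
src  = AviSource("C:\path\video.avi").AssumeTFF().ConvertToYV12(interlaced=true)
test = src.QTGMC(preset="medium").SeparateFields().SelectEvery(4,0,3).Weave()
Subtract(src, test)   # flat gray where nothing changed; visible detail marks QTGMC's edits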
You can get into overthinking it.
By the way, if you're working with consumer DV-AVI, the field priority is BFF. Classic HTML page: Neuron2's "How To Analyze Video Frame Structure". |
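The method from that page can be scripted in a few lines if you want to check a clip's field structure yourself (file name hypothetical):

Code:
AviSource("C:\path\video.avi")
AssumeBFF()        # the consumer DV assumption; try AssumeTFF() for comparison
SeparateFields()   # step through the half-height fields one by one: motion should
Info()             # advance smoothly, not jitter back and forth, if the order is right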