digitalFAQ.com Forums [Archives] (http://www.digitalfaq.com/archives/)
-   Avisynth Scripting (http://www.digitalfaq.com/archives/avisynth/)
-   -   Avisynth: MA script for interlaced sources? (http://www.digitalfaq.com/archives/avisynth/6120-avisynth-ma-script.html)

Boulder 10-15-2003 07:56 AM

Avisynth: MA script for interlaced sources?
 
Code:

nf=0

MPEG2Source("path\video.d2v")

Crop(enter your values here, crop height mod 4, width mod 2!)

SeparateFields()
UnDot()
BicubicResize(enter your values here, height/2!)
MergeChroma(Blur(1.5))
MergeLuma(Blur(0.1)) # use only if the video won't get too blurry!
STMedianFilter(4,32,0,0)

SwitchThreshold = (Width<=352) ? 4 : (Width<=480) ? 3 : 2
even=SelectEven().ScriptClip("nf=YDifferenceToNext()"+chr(13)+"nf>=SwitchThreshold?unfilter(-(fmin(round(nf)*2,100)),-(fmin(round(nf)*2,100))):TemporalSoften(fmin(round(2/nf),6),round(1/nf),round(3/nf),0,2)")
odd=SelectOdd().ScriptClip("nf=YDifferenceToNext()"+chr(13)+"nf>=SwitchThreshold?unfilter(-(fmin(round(nf)*2,100)),-(fmin(round(nf)*2,100))):TemporalSoften(fmin(round(2/nf),6),round(1/nf),round(3/nf),0,2)")
Interleave(even,odd)
Weave()

AddBorders here
Limiter()

function fmin( int f1, int f2) {
  return ( f1<f2 ) ? f1 : f2
}

This script is meant for interlaced encodes. As I put most of my TV caps on DVD these days, I encode them as interlaced to preserve the details, sharpness and the smooth motion. Each of the even= and odd= lines contains the whole MA portion of the script on a single line; the word wrapping just makes them appear as several lines.

Spatial filters affect the video more because SeparateFields() splits each frame into two fields, so 720x576 becomes 720x288. That's why I've lowered the STMedianFilter values; MergeLuma may also turn out to be too strong. You may want to try replacing STMedianFilter with FluxSmooth. Just don't do any temporal filtering at that point (use FluxSmooth(-1,4) for example)!

DCTFilter can introduce some artifacts near the top and bottom of the video. This can be avoided by cropping slightly. It's probably due to the filter's nature; unfortunately I can't give you a better explanation.
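
For illustration, here's one way the placeholders might be filled in, assuming a 720x576 PAL capture and a 704x576 interlaced target; all the crop, resize and border values below are just example numbers showing how the pieces fit together, not recommendations:

Code:

nf=0

MPEG2Source("path\video.d2v")

Crop(8,4,-8,-4) # 720x576 -> 704x568: width stays mod-2, height stays mod-4

SeparateFields() # fields are now 704x284
UnDot()
BicubicResize(656,272,0,0.6) # half the final active height, because these are single fields
MergeChroma(Blur(1.5))
MergeLuma(Blur(0.1)) # use only if the video won't get too blurry!
STMedianFilter(4,32,0,0)

SwitchThreshold = (Width<=352) ? 4 : (Width<=480) ? 3 : 2
even=SelectEven().ScriptClip("nf=YDifferenceToNext()"+chr(13)+"nf>=SwitchThreshold?unfilter(-(fmin(round(nf)*2,100)),-(fmin(round(nf)*2,100))):TemporalSoften(fmin(round(2/nf),6),round(1/nf),round(3/nf),0,2)")
odd=SelectOdd().ScriptClip("nf=YDifferenceToNext()"+chr(13)+"nf>=SwitchThreshold?unfilter(-(fmin(round(nf)*2,100)),-(fmin(round(nf)*2,100))):TemporalSoften(fmin(round(2/nf),6),round(1/nf),round(3/nf),0,2)")
Interleave(even,odd)
Weave() # full frames again: 656x544

AddBorders(24,16,24,16) # pad back up to 704x576
Limiter()

function fmin( int f1, int f2) {
  return ( f1<f2 ) ? f1 : f2
}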

This is only a small sample based on Kwag's optimal script. Feel free to bash it around :twisted:

Boulder 10-15-2003 08:21 AM

Some remarks:

SwitchThreshold may need tweaking. Kwag?

kwag 10-15-2003 09:50 AM

Quote:

Originally Posted by Boulder
Some remarks:

SwitchThreshold may need tweaking. Kwag?

Looking at it now :)
Maybe make SwitchThreshold a manual constant, until you get the desired switching value :?:
Just include ScriptClip("Subtitle(String(nf),1,30)") after your Limiter() call. Then you can see the dynamic value of nf as you play your .avs in VirtualDub or any player; this way you can see the activity level and tune the SwitchThreshold selection line depending on activity/resolution.
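
In other words, the tail of the script would look something like this while tuning (just a sketch of where the debug line goes; remove it again for the real encode):

Code:

# ...rest of the MA script as in the first post...
Limiter()
ScriptClip("Subtitle(String(nf),1,30)") # overlays the current nf value on every frame while previewing the .avs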

-kwag

Boulder 10-15-2003 10:00 AM

Thanks! I've often wondered how I could see the 'nf' value at each frame... never really got around to checking how Subtitle works.

I'm going to capture a TV series tonight and see what I can come up with.

nicksteel 10-15-2003 10:06 AM

Boulder,

Do post your results. I do a lot of MPEG2 captures and would like to try doing interlaced.

Boulder 10-15-2003 10:14 AM

A _very_ quick test on a 720x576 VHS capture shows that the value is around 2.5 - 4.0 when there's little motion and 6.0 - 12.0 when there's motion (not intense but some).

How would I start tweaking the threshold?

*:lol: feeling like a total newbie :lol:*

kwag 10-15-2003 10:36 AM

Quote:

Originally Posted by Boulder
A _very_ quick test on a 720x576 VHS capture shows that the value is around 2.5 - 4.0 when there's little motion and 6.0 - 12.0 when there's motion (not intense but some).

How would I start tweaking the threshold?

*:lol: feeling like a total newbie :lol:*

For that resolution, I would use SwitchThreshold = 4. So the dynamic blurring only kicks in once nf reaches 4.
Or you can expand the line: "SwitchThreshold = (Width<=352) ? 4 : (Width<=480) ? 3 : 2" to include more resolutions :idea: :D
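
For example, the expanded line might look something like this (the extra tier and its thresholds are only placeholders to tune):

Code:

SwitchThreshold = (Width<=352) ? 4 : (Width<=480) ? 3 : (Width<=544) ? 2 : 4 # e.g. 4 again for wide captures, as suggested above for 720x576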

-kwag

audioslave 10-15-2003 05:20 PM

@Boulder

How/where do you get the correct values for Crop and BicubicResize? I'm guessing you're using MovieStacker, but could you please explain where you find the values? :oops:

EDIT: Never mind the BicubicResize question, but I'm still having trouble with getting Crop to work...

Zyphon 10-15-2003 05:50 PM

Thanx 4 the script Boulder looks good. :)

audioslave 10-15-2003 06:22 PM

I simply put my Crop values from MovieStacker's Crop boxes in the script, like this: Crop(668, 428)
But I'm getting an 'invalid arguments to function "Crop"' error in Vdub.
How does this line have to look to be correct :?:

Boulder 10-16-2003 03:32 AM

Quote:

Originally Posted by audioslave
I simply put my Crop values from MovieStacker's Crop boxes in the script, like this: Crop(668, 428)
But I'm getting an 'invalid arguments to function "Crop"' error in Vdub.
How does this line have to look to be correct :?:

When everything else fails, read the docs :lol:

You'll have to state how many pixels you crop. I crop manually, using VirtualDub for help. I've never used any program to get the correct values as I trust my own eyes more.

So, the syntax is Crop(left_crop, top_crop, destination_width, destination_height).

Crop(10,24,700,528) would mean that you would crop 10 pixels off the left and right side and 24 pixels off the top and bottom of the clip. The source would be 720x576 in this case (720 - 10 - 10 = 700 and 576 - 24 - 24 = 528).

If you want to crop a different number of pixels from each side, you can use Crop(left,top,-right,-bottom).


From the Avisynth docs:

In order to preserve the data structure of the different colorspaces, the following mods should be used. You will not get an error message if they are not obeyed, but it may create strange artifacts.

In RGB:
width no restriction
height no restriction if video is progressive
height mod-2 if video is interlaced

In YUY2:
width mod-2
height no restriction if video is progressive
height mod-2 if video is interlaced

In YV12:
width mod-2
height mod-2 if video is progressive
height mod-4 if video is interlaced
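
Tying those mod rules back to the Crop examples above, a hypothetical crop of an interlaced YV12 source could look like this (the numbers are only an illustration):

Code:

# 720x576 interlaced YV12 source
Crop(8,4,-8,-8) # left=8, top=4, right=8, bottom=8 -> 704x564: width is mod-2, height is mod-4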

Dialhot 10-16-2003 03:59 AM

Or you can use the Avisynth plugin AutoCrop, which will give you this information visually, right on the image.

nicksteel 10-16-2003 07:54 AM

Boulder.................
 
When you get your interlaced script tweaked, could you post it? I'm capturing with a PVR-250 at 720x480.

NickSteel

Boulder 10-16-2003 08:07 AM

Sure thing. I'm currently testing the script with the capture, checking some low-motion scenes and extremely high-motion scenes to see how the MA script compares to a script I normally use.

Boulder 10-16-2003 09:29 AM

OK, here are some results:

With the MA script
Code:

nf=0
AVISource("path\clip.avi")
ConverttoYV12(interlaced=true)
SeparateFields()
UnDot()
BicubicResize(656,272,0,0.6)
MergeChroma(Blur(1.5))
MergeLuma(Blur(0.1)) # use only if the video won't get too blurry!
FluxSmooth(-1,7)

SwitchThreshold = (Width<=352) ? 4 : (Width<=480) ? 3 : 2
even=SelectEven().ScriptClip("nf=YDifferenceToNext()"+chr(13)+"nf>=SwitchThreshold?unfilter(-(fmin(round(nf)*2,100)),-(fmin(round(nf)*2,100))):TemporalSoften(fmin(round(2/nf),6),round(1/nf),round(3/nf),0,2)")
odd=SelectOdd().ScriptClip("nf=YDifferenceToNext()"+chr(13)+"nf>=SwitchThreshold?unfilter(-(fmin(round(nf)*2,100)),-(fmin(round(nf)*2,100))):TemporalSoften(fmin(round(2/nf),6),round(1/nf),round(3/nf),0,2)")
Interleave(even,odd)
Weave()

AddBorders(24,16,24,16)
Limiter()
ConverttoYUY2(interlaced=true)

function fmin( int f1, int f2) {
  return ( f1<f2 ) ? f1 : f2
}

Low motion 55 755 504 bytes
High motion 32 993 620 bytes

With a static script
Code:

SeparateFields()
UnDot()
BicubicResize(656,272,0,0.6)
MergeChroma(Blur(1.5))
MergeLuma(Blur(0.1)) # use only if the video won't get too blurry!
FluxSmooth(-1,7)
even=SelectEven().TemporalCleaner(6,11)
odd=SelectOdd().TemporalCleaner(6,11)
Interleave(even,odd)
Weave()

AddBorders(24,16,24,16)
Limiter()

function fmin( int f1, int f2) {
  return ( f1<f2 ) ? f1 : f2
}

Low motion 55 206 840 bytes
High motion 35 406 512 bytes

Encoding is surprisingly slightly faster with the MA script. The result is a bit blurrier, but I'm not sure if it's noticeable when watched on a TV. The low-motion scene was a bit larger with the MA script, but if I had used TemporalCleaner instead of TemporalSoften, I'm sure that the filesize would have been lower than with the static script.

I'm now encoding the whole clip to see how it looks.

nicksteel 10-24-2003 05:33 AM

Boulder,
 
Quote:

Originally Posted by Boulder
OK, here are some results:


I'm going to capture a marathon run of several Planet of the Apes films with my PVR250 as 720x480 MPEG2's. I would like to try keeping them interlaced and encoding them as 704x480 or 352x480 MPEG2 with SKVCD.

I plan to use DVD2AVI to make d2v and create trim statements with VDUB. I will not create AVI files, but go MPEG2 to MPEG2.

Should I use the first script EXACTLY as listed above (with d2v instead of avi), or would you recommend something else?
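
Presumably only the source lines at the top would change for a d2v, since MPEG2Source via MPEG2Dec3 already delivers YV12; something like this (placeholder path)?

Code:

nf=0
MPEG2Source("path\capture.d2v") # placeholder path; no ConvertToYV12() needed, the d2v is already YV12
SeparateFields()
# ...rest of the MA script as in the post quoted above...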

J-Wo 10-24-2003 12:30 PM

ooooooohhhhhhhhhhh I am SOOOO confused about the cropping and resizing and AddBorders bit! I don't think I can handle the "use my eyes" idea because I just don't understand how to change the parameters correctly. If I use Boulder's BicubicResize parameters and tweak them slightly for 704x480 (as opposed to 704x576), it looks like his AddBorders line is resizing the image too much (i.e. I can see some black bars on the sides).

Is there not an easy way to do this? I rely on MovieStacker so much!!! :oops: I also gave AutoCrop a try, which found that my source has some black bars on the sides, so it recommends a crop of Crop(8,0,708,480). This just throws everything off for me, so now I have completely NO idea what to do for BicubicResize or AddBorders. Hoping someone out there can help me!

J-Wo 10-24-2003 10:45 PM

Okay, just so I don't sound like a complete buffoon: I finally figured out how to take the resize values obtained from MovieStacker and make them work with Boulder's script. Whether you choose Overlap or Resize for "Blocks TV-Overscan", take whatever values it gives for AddBorders() and/or Letterbox() and put them at the end of your script. Choose Bicubic precise as your resize method, but you'll have to make some modifications to the figures given. In my example, I have a 720x480 source that I'm encoding at 704x480. So according to MovieStacker, I get:
Code:

BicubicResize(704, 352, 0, 0.6, 0, 0, 720, 480)
AddBorders(0, 64, 0, 64)
LetterBox(0, 0, 16, 16)

But in Boulder's original script he mentioned BicubicResize(enter your values here, height/2!). So I halved 352 and got 176. I also noticed that Boulder doesn't include the last four values for resizing, so the final line becomes:
Code:

BicubicResize(704, 176, 0, 0.6)
AddBorders(0, 64, 0, 64)
LetterBox(0, 0, 16, 16)

In this example, I'm using overscan = 2. If I use resize = 2 then the lines become:
Code:

BicubicResize(672, 168, 0, 0.6)
AddBorders(16, 72, 16, 72)

I hope this explanation might be able to help others. I also found Boulder's script does an amazing job on my NTSC interlaced DVD source, which was quite grainy/pixelated/noisy in certain scenes. Permutations of the MA script simply never got it right, and this one did the trick. It also led to almost the same compression, but was way faster. Thanks a load Boulder!

nicksteel 10-25-2003 06:36 AM

When I use MovieStacker v2.0.0 (beta3) for a PVR250 capture of Futurama at 720x480 to produce a KSVCD at 480x480:

MPEG Resizing

Source 720x480 DVD Pal (Unchecked) Anamorphic (Unchecked) ITU-R BT.601-4 (Checked)

Film pixel 720x480 0 left border 0 top border

Crop 720x480 accurate Use GripFit(crop/resize)(Unchecked)

Resize 336x446

Destination 480x480 SVCD Anamorphic(Checked) Format conversion(Unchecked)

Blocks TV-Overscan 2 Resize

AviSynth Script

Bicubic precise

LoadPlugin("C:\video\moviestacker\Filters\MPEG2Dec .dll")

Mpeg2Source("H:\fut16\fut16.d2v")
BicubicResize(336, 446, 0, 0.6, 0, 1, 720, 478)
AddBorders(72, 17, 72, 17)

If I halve 446 to 223, it will not run.

BicubicResize(336, 223, 0, 0.6)

What should the BicubicResize line look like for this?

My capture is already YV12. Should I change the script in any way?

J-Wo 10-25-2003 09:40 AM

Are you using AviSynth 2.5x? I think Boulder's script requires it.

nicksteel 10-25-2003 10:37 AM

Using AviSynth 2.5x.

incredible 10-25-2003 10:45 AM

If you captured your source at 720x480, it doesn't seem that "ITU-R BT.601-4" should be checked in MovieStacker.

Boulder 10-25-2003 03:25 PM

The ITU setting is something you have to check for your own card. Some cards do horizontal scaling, like my Hauppauge WinTV Theatre does, so it doesn't matter whether I use a horizontal resolution of 704, 720 or 768; the result is always scaled and the capture itself doesn't have any overscan borders.

If your capture is already in YV12 (all MPEGs for example), there's naturally no need to do the conversion.

Nick, that AddBorders line looks really odd. You're adding 72 pixels to the left and right and 17 pixels to the top and bottom? (EDIT: Never mind... you're doing anamorphic, but why? Your source is not anamorphic.)

Boulder 10-25-2003 03:33 PM

And your script might look like this:

Mpeg2Source("H:\fut16\fut16.d2v")
EDIT: BicubicResize(448,224)
AddBorders(16,16,16,16)

Don't crop an odd number of pixels vertically if the source is interlaced! See the post earlier in this thread for correct cropping parameters.
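
To spell out the numbers: the resize is done on the separated fields (720x480 becomes 720x240 per field), 224 lines is half of the 448-line active frame, and the 16-pixel borders bring the woven 448x448 frame up to 480x480. Stripped of the filtering and the even/odd MA lines, the geometry alone would be (a sketch, reusing Nick's path):

Code:

Mpeg2Source("H:\fut16\fut16.d2v")
SeparateFields() # 720x480 becomes 720x240 fields
BicubicResize(448,224) # per-field resize: 224 lines = half of the 448-line active frame
Weave() # fields back together: 448x448
AddBorders(16,16,16,16) # pad to the final 480x480 frame
Limiter()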

nicksteel 10-27-2003 08:13 PM

Boulder,
 
The decomb500 tutorial gives Telecide() as:

Telecide(order=1,guide=1,post=4,vthresh=24)

MovieStacker gives Telecide() as:

Telecide(guide=0, gthresh=30, post=true, threshold=15, dthreshold=9, blend=true, show=false, agg=false, reverse=false, firstlast=false, chroma=false, nt=0, mm=0)

I now understand how Telecide() works using the decomb500 tutorial. I need to know how to use that information with the regular Telecide() function.

ozjeff99 10-28-2003 08:49 AM

Hi Boulder. When docs say:

height mod-4 if video is interlaced

I gather that means a multiple of 4?

Excuse my ignorance.
Regards
ozjeff99

Boulder 10-28-2003 09:00 AM

Quote:

Originally Posted by ozjeff99
Hi Boulder. When docs say:

height mod-4 if video is interlaced

I gather that means a multiple of 4?

Excuse my ignorance.
Regards
ozjeff99

Yep, that's what it means. I don't know why they've used mod-4 as it would have been clearer to state "height must be a multiple of 4".

digitall.doc 11-13-2003 04:55 AM

I like to keep my films interlaced for making MPEG2 and SKVCD.
I've been using the "interlaced script" that you proposed in this thread, and I'm happy with the results.
I read in "another" forum that, for spatial and temporal filters, it was recommended to do it this way:
Code:

BicubicResize(_parameters_)
#ComplementaryFields() # commented out because my videos are bottom field first
Bob(0,0.5)
#Spatial filters
#Temporal filters
SeparateFields()
SelectEvery(4,1,2)
Weave()

The visual result on the PC is similar to our script, but I still have to test it on a TV. I think the final size gets bigger (I have to compare them).
What's your opinion of this suggestion? What is it doing? I know very little about filtering, but I think it turns the fields into full-resolution frames, applies the filters, and then separates the fields again and selects two of them to weave back into a frame. Is that right?
Do you think it's a better way to keep the source interlaced and apply filters, or is it just good for some special situations?
I'm very interested in your feedback 8O .

GFR 11-13-2003 06:11 AM

Bob() turns each field into a full-height frame and doubles the frame rate, so you don't need to worry about temporal and spatio-temporal filters mixing the two fields.

The

SeparateFields()
SelectEvery(4,1,2)
Weave()

sequence throws away the extra fields, so the clip is back at the original frame rate, and then weaves it into interlaced frames again.

This method is good quality (maybe marginally better?), but it will take (much) longer.
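
Annotated step by step, the idea is roughly this (a sketch; the TemporalSoften call is only a stand-in for whatever filtering you want to run on the full-height frames):

Code:

Bob(0,0.5) # every field becomes a full-height frame: double frame rate, progressive
TemporalSoften(2,3,3) # temporal (and spatial) filters now work without mixing the two fields
SeparateFields() # four fields per original frame (two per bobbed frame)
SelectEvery(4,1,2) # keep one field per bobbed frame; whether you need (4,1,2) or (4,0,3) depends on field order
Weave() # back to interlaced frames at the original frame rate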

Boulder 11-13-2003 06:22 AM

This is also a very good way to process interlaced material, keeping it interlaced. You'll need the ViewFields and UnViewFields plugins by Simon Walters. This is the script I used with "The Trouble With Harry".

Code:

MPEG2Source("c:\temp\dvd-rip\the trouble with harry\harry.d2v",idct=5)
Crop(0,16,-4,-12)
ViewFields()
UnDot()
BicubicResize(672,384)
MergeChroma(Blur(1.5))
Convolution3d(2,6,10,8,8,3,0)
Blockbuster(method="noise",variance=0.3,seed=4888,block_size=3)
DCTFilter(1,1,1,1,1,0.75,0.5,0)
UnViewFields()
AddBorders(16,96,16,96)
ConvertToYUY2(interlaced=true) # CCE wants YUY2
Limiter()

ViewFields puts the top field on the top and the bottom field on the bottom of the frame. This allows proper spatial and temporal filtering. In theory it could blur the edge areas between the top and bottom section (that is, the area in the middle of the frame) but you won't notice anything when the fields are put back to their original position.

The good thing is that no filter needs to be called twice. Some filters just don't like that and may produce strange results. I haven't done any speed tests but I suspect that this is faster than SelectEven+SelectOdd+Interleave. It is definitely a lot faster than using Bob.

nicksteel 11-13-2003 08:31 AM

Quote:

Originally Posted by Boulder
This is also a very good way to process interlaced material, keeping it interlaced. You'll need the ViewFields and UnViewFields plugins by Simon Walters. This is the script I used with "The Trouble With Harry".


My captures are YV12 and top field first. How would I use this?

incredible 11-13-2003 08:43 AM

Nic, ....

as you see above, the script handles a DVD-Rip - .d2v and in this CASE this means YV12!

So :arrow: he also uses a YV12 source ... like you do when handling your mpeg2 captures ... :idea:

Here's the mpeg2dec3 "readme" doc, have a look:
http://home.earthlink.net/~teluial/a...MPEG2Dec3.html

nicksteel 11-13-2003 08:50 AM

Quote:

Originally Posted by incredible
Nic, ....

as you see above, the script handles a DVD-Rip - .d2v and in this CASE this means YV12!

So :arrow: he also uses a YV12 source ... like you do when handling your mpeg2 captures ... :idea:

ViewFields puts the top field on the top and the bottom field on the bottom of the frame. This allows proper spatial and temporal filtering.

My captures are top field first. Does ViewFields assume field order or is it unimportant?

MPEG2Source("c:\temp\dvd-rip\the trouble with harry\harry.d2v",idct=5)

What is "idct=5"?


Also I assume that since I use TMPGEnc, I don't need the ConverttoYUY2 line.

Boulder 11-13-2003 09:30 AM

The field order shouldn't matter. Just try with only MPEG2Source and ViewFields lines in your script and you'll see what ViewFields does.

IDCT=5 means that MPEG2DEC3.dll uses SSE2 instructions to decode the video (I have a P4). I don't know if it's any faster than the default IDCT=2 though :wink:

I would definitely do the conversion in Avisynth when dealing with interlaced sources; I don't trust external codecs to do the job for me. With a YUY2 source, conversion by a codec should work if HuffYUV is doing the job, but since HuffYUV doesn't support YV12, I really don't know which codec would do the YV12->RGB24 conversion. Maybe you should try ConvertToRGB24(interlaced=true) instead of the ConvertToYUY2(interlaced=true) line in my script.
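
So for TMPGEnc the end of the script would presumably just swap the conversion, something like this (a sketch; note that Limiter() stays before the conversion because it wants a YUV clip):

Code:

AddBorders(16,96,16,96)
Limiter() # limit while still in YUV
ConvertToRGB24(interlaced=true) # interlaced-aware conversion done in Avisynth instead of by a codec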

nicksteel 11-13-2003 09:41 AM

ConvertToYUY2(interlaced=true) # CCE wants YUY2

Pardon my confusion, but I capture in YV12 and don't use CCE. Do I still need to convert? I have used ConvertToYUY2(interlaced=true) in the past, when I was capturing with HuffYUV AVI.

Boulder 11-13-2003 10:29 AM

As I said in my answer, you can use ConverttoRGB24(interlaced=true) instead of ConverttoYUY2(interlaced=true) if you use TMPGEnc :wink:

cweb 03-30-2004 03:21 PM

Should this interlaced MA script use 'motion estimate search (fast)' in TMPGEnc, like the regular MA script?
Also, when the .avs file is opened in TMPGEnc, do you need to choose 'interlaced' as the source type? (With deinterlaced or progressive sources you usually have to change this to non-interlaced, or whatever the option is called.)

incredible 03-30-2004 03:25 PM

Do a sample encoding of maybe 5 minutes including a lot of variation in the frames, and decide on your own, as your eyes won't lie :wink:

Peter1234 03-30-2004 08:53 PM

This seems to work for me; it is also 30% faster than encoding without separating fields.

### for interlaced source only

LoadPlugin("C:\Filters25\undot.dll")
LoadPlugin("C:\Filters25\STMedianFilter.dll")
LoadPlugin("C:\Filters25\Convolution3DYV12.dll")

AVISource("C:\Documents and Settings\user\Desktop\DVtype2.avi")
ConvertToYV12()
Levels(0,0.94,255,0,255)

SeparateFields()
odd=SelectOdd().Kwag_MA1()
even=SelectEven().Kwag_MA2()
Interleave(even,odd)
Weave()

Function Kwag_MA1 (clip input1) {
undot(input1)
Limiter(input1)
STMedianFilter(input1,3, 3, 1, 1 )
MergeChroma(input1,blur(1.5))
MergeLuma(input1,blur(0.1))
ScriptClip(" nf = YDifferenceToNext()" +chr(13)+
\ "unfilter( -(fmin(round(nf)*2, 100)), -(fmin(round(nf)*2,
\ 100)) ).TemporalSoften( fmin( round(2/nf), 6),
\ round(1/nf) , round(3/nf) , 1, 1) ")
Convolution3D(input1,preset="movieLQ")
Limiter(input1)}

Function Kwag_MA2 (clip input2) {
undot(input2)
Limiter(input2)
STMedianFilter(input2,3, 3, 1, 1 )
MergeChroma(input2,blur(1.5))
MergeLuma(input2,blur(0.1))
ScriptClip(" nf = YDifferenceToNext()" +chr(13)+
\ "unfilter( -(fmin(round(nf)*2, 100)), -(fmin(round(nf)*2,
\ 100)) ).TemporalSoften( fmin( round(2/nf), 6),
\ round(1/nf) , round(3/nf) , 1, 1) ")
Convolution3D(input2,preset="movieLQ")
Limiter(input2)}

Function fmin( int f1, int f2) {return ( f1<f2 ) ? f1 : f2}

Dialhot 03-30-2004 09:02 PM

FYI, you can use only one function, since the content is the same. The clip parameter will be bound to "even" or "odd" according to which line calls the function.

Note: you added a C3D("movieLQ") at the end of the normal MA script? And you use that on DivX??? Wow... you really like blurred pictures :-)
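
For illustration, the single shared function would be declared once and called from both lines, something like this (with a much-simplified filter chain just to show the structure, not a drop-in replacement for Peter1234's functions):

Code:

SeparateFields()
odd=SelectOdd().Kwag_MA()
even=SelectEven().Kwag_MA()
Interleave(even,odd)
Weave()

Function Kwag_MA (clip input) {
  # "input" is bound to whichever clip (even or odd fields) the function is called on
  c = input.UnDot()
  c = c.MergeChroma(c.Blur(1.5))
  c = c.MergeLuma(c.Blur(0.1))
  return c.Limiter()
}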

