digitalFAQ.com Forums [Archives] (http://www.digitalfaq.com/archives/)
-   Avisynth Scripting (http://www.digitalfaq.com/archives/avisynth/)
-   -   Avisynth: Motion adaptive filtering now possible? (http://www.digitalfaq.com/archives/avisynth/3594-avisynth-motion-adaptive.html)

Jellygoose 05-17-2003 03:57 AM

So if you can load the 2.0x filters into AviSynth 2.5x with that LoadPluginEx.dll, how come Sampler.dll doesn't work with AviSynth 2.5x?
:roll:

Boulder 05-17-2003 05:45 AM

Sampler does work in AVS 2.5.1, and it should have since v0.2. I use it all the time; just remember to use sampler-2.5.dll instead of sampler.dll. They're both in the same package.
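For example (the path here is only an illustration), the only change in a 2.5x script is which DLL you point LoadPlugin at:

Code:

LoadPlugin("C:\Filters25\sampler-2.5.dll")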

sbin 05-17-2003 09:48 PM

Well, after spending an entire Saturday trying to modify this script I have accomplished precisely nothing, except learning a whole lot about how NOT to write a script. :roll: Not being a programmer type, I'm afraid my scripting skills just aren't up to the task. So I will throw a few random brain droppings out for you all, and maybe somebody with more scripting ability can make some of it work.

The first thing I noticed when I added the Subtitle function to Sagittaire's latest posted script was how little of the movie was being classified as high motion. I'm using a James Bond movie as my sample, and there were ski chases, car chases, fistfights, and a climactic shootout that never really got above medium.

That led me to believe the threshold values need to be tweaked a little. I reduced threshold_hm to 10, and that bought me about 100k off my file size. But then I started noticing scenes with a sharp threshold cutoff and a noticeable shift in the luma blurring.

So then I started thinking along the lines of what kwag mentioned: maybe we need more than 3 brackets to give a better range of values that will both compress better and make the threshold transitions smoother, like maybe 5 brackets (see the sketch below). I spent most of the day today trying to add two new brackets for "VERY LOW" and "VERY HIGH" and set the thresholds at 3, 6, 9, 12 and 15, but somehow I just can't get both the logic and the syntax right.
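For the curious, here's roughly the kind of chain I was aiming for. This is only a sketch with hypothetical names: src would be the source clip, and VeryLow/Low/Medium/High/VeryHigh would be pre-filtered copies of it in the style of Sagittaire's Motion_* functions. Note that five cutoff values actually define six ranges, so this version keeps only four of them:

Code:

diff_expr = "YDifferenceFromPrevious() + UDifferenceFromPrevious() + VDifferenceFromPrevious()"

out = ConditionalFilter(src, VeryLow, Low,      diff_expr, "<", "3",  false)
out = ConditionalFilter(src, out,     Medium,   diff_expr, "<", "6",  false)
out = ConditionalFilter(src, out,     High,     diff_expr, "<", "12", false)
out = ConditionalFilter(src, out,     VeryHigh, diff_expr, "<", "15", false)

So a frame lands in VeryLow below 3, Low between 3 and 6, Medium between 6 and 12, High between 12 and 15, and VeryHigh above 15.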

Along the way, my Detect_Motion function got quite long and cumbersome, so I started trying to find a way to simplify it. As I said, I'm not a programmer type, so this may be totally off the wall, but..... I started trying to create a function that would perform the operation
Code:

output1 = Conditionalfilter( Courant_fr, Low, Medium, "YDifferenceFromPrevious() + UDifferenceFromPrevious() + VDifferenceFromPrevious()", "<", "threshold_sm", false)
output2 = Conditionalfilter( Next_fr, output1, Medium, "YDifferenceFromPrevious() + UDifferenceFromPrevious() + VDifferenceFromPrevious()", "<", "threshold_sm", false)

only once and then store the resulting value as a variable which can then be used in the ConditionalFilter statements. Much cleaner to my mind, but I couldn't get it to work. :(

But studying up on that led me to the ConditionalFilter docs at AVISynth.org:

http://www.avisynth.org/index.php?pa...ditionalFilter

I discovered that if you add the line

Code:

ScriptClip(clip, "Subtitle(String(YDifferenceFromPrevious() + UDifferenceFromPrevious() + VDifferenceFromPrevious()))")
to the script, you will get basically a numeric representation of the motion in each frame printed on the output. FrameEvaluate performs the same function, but its output is ignored, so it can be used for setting variables.

So here's where my thought is right now: Can we use FrameEvaluate to simply analyze the frame for motion without doing anything to it, and then put that result into a variable? I'm wondering if that could be the first step toward creating kwag's dynamic range of values.....

Ideas? Or do I need to go back to my day job? :lol:

kwag 05-17-2003 10:41 PM

Quote:

Originally Posted by sbin
But then I started noticing some scenes where there was a sharp threshold cutoff and there was a noticeable shift in the luma blurring.

That's exactly what I suspected would happen :mrgreen:
That's why I suggested making the filter linear instead of using three ranges, where a sharp cut-off/turn-on WILL happen the way the script is currently set up. 8)
Quote:


So then I started thinking along the lines of what kwag mentioned that maybe we need more than 3 brackets to give a better range of values that will both compress better and make the threshold transitions smoother. Like maybe 5 brackets.
The more brackets, the better, but that still won't solve the drastic filter changes at the boundaries :!:
Quote:

I spent most of the day today trying to add two new brackets for "VERY LOW" and "VERY HIGH" and set the thresholds at 3, 6, 9, 12 and 15. But somehow I just can't get both the logic and the syntax right.
It would be better to integrate this into a filter, instead of scripting function after function... etc.
Quote:


Along the way, my Detect_Motion function got quite long and cumbersome, so I started trying to find a way to simplify it. As I said, I'm not a programmer type, so this may be totally off the wall, but..... I started trying to create a function that would perform the operation
Code:

output1 = Conditionalfilter( Courant_fr, Low, Medium, "YDifferenceFromPrevious() + UDifferenceFromPrevious() + VDifferenceFromPrevious()", "<", "threshold_sm", false)
output2 = Conditionalfilter( Next_fr, output1, Medium, "YDifferenceFromPrevious() + UDifferenceFromPrevious() + VDifferenceFromPrevious()", "<", "threshold_sm", false)

only once and then store the resulting value as a variable which can then be used in the ConditionalFilter statements. Much cleaner to my mind, but I couldn't get it to work. :(
Don't worry, it happens to all of us :) Especially with spaghetti code in scripts, like AviSynth scripting :lol:
Quote:


But studying up on that led me to the ConditionalFilter docs at AVISynth.org:

http://www.avisynth.org/index.php?pa...ditionalFilter

I discovered that if you add the line

Code:

ScriptClip(clip, "Subtitle(String(YDifferenceFromPrevious() + UDifferenceFromPrevious() + VDifferenceFromPrevious()))")
to the script, you will get basically a numeric representation of the motion in each frame printed on the output. FrameEvaluate performs the same function, but its output is ignored, so it can be used for setting variables.

So here's where my thought is right now: Can we use FrameEvaluate to simply analyze the frame for motion without doing anything to it, and then put that result into a variable?
That's exactly what I proposed, but still, it would be more elegant to use a dedicated AviSynth motion-detection engine filter for the detection, and "call" the external filter you want "attached".
Quote:

I'm wondering if that could be the first step toward creating kwag's dynamic range of values.....

Ideas? Or do I need to go back to my day job? :lol:
You should have seen how the file size prediction started off :) It's come a Looong way since then 8)
Stick around; I'm pretty sure that in the next ~30 days, this will turn into a whole new monster :mrgreen:

-kwag

sbin 05-17-2003 11:05 PM

Quote:

You should have seen how the file size prediction started off
Actually, I did see. :lol: I've been here lurking and learning for a long time... through all the CQ/CQ_VBR business, file prediction, and way back before. Just never had anything to post about before. :mrgreen:

kwag 05-17-2003 11:19 PM

Quote:

Originally Posted by sbin
Just never had anything to post about before. :mrgreen:

Well, you started this thread (and it was your first post, too!), and it looks like it's going to be a long and GOOD topic :wink:

-kwag

sh0dan 05-18-2003 04:58 AM

Quote:

Originally Posted by sbin
only once and then store the resulting value as a variable which can then be used in the ConditionalFilter statements. Much cleaner to my mind, but I couldn't get it to work. :(

As you propose, FrameEvaluate is the function to use for this. The tricky part is that you need to assign the values AFTER the point where you actually use them. Like this:
Code:

function Detect_Motion( clip detect, clip Slow, clip Medium, clip Hight, float threshold_sm, float threshold_hm)

{

global Courant_fr = detect
global Next_fr = detect.trim(1,0)

output1 = Conditionalfilter( Courant_fr, Slow, Medium, "diff", "<", "threshold_sm", false)

output2 = Conditionalfilter( Next_fr, output1, Medium, "diff", "<", "threshold_sm", false)

output3 = Conditionalfilter( Courant_fr, Hight, output2, "diff", ">", "threshold_hm", false)

output4 = Conditionalfilter( Next_fr, output3, output2, "diff", ">", "threshold_hm", false)
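
# Note: the FrameEvaluate below wraps the whole chain above, so at run time
# "diff" is computed for each frame before the ConditionalFilters read it,
# even though the assignment comes later in the script text.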

output4 = frameevaluate(output4,"diff = YDifferenceFromPrevious(Courant_fr) + UDifferenceFromPrevious(Courant_fr) + VDifferenceFromPrevious(Courant_fr)")

return output4

}

Thinking about it, there might be a way of making this more logical, where you assign variables BEFORE they are used. ;)

Sagittaire 05-19-2003 07:32 AM

Quote:

####################################################################################

# AviSynth 2.51 RC3 #

# Script Motion Detection Filter YV12 #

####################################################################################


#################################### Faq ...;-) ###################################


# MPEG2Dec3.dll #
# Convolution3DYV12.dll #
# FluxSmooth-2.5.dll #

# Source : path to the .d2v project file from DVD2AVI 1.76 #
# CPU_type : choose 5 for a Pentium IV and 2 for other processors #

# Threshold : thresholds used for scene motion detection #

# Top : Crop top of the image #
# Left : Crop left of the image #
# Right : Crop right of the image #
# Bottom : Crop bottom of the image #

# DimX : Width of the image #
# DimY : Height of the image #

# Start : Start Frame #
# End : End Frame #



#################################### Variables ####################################


Source = "C:\Stock\azerty.d2v"
CPU_type = 2

threshold_sm = 5
threshold_hm = 15

Top = 76
Left = 16
Right = 16
Bottom = 74

DimX = 640
DimY = 272

Start = 0
End = 0



################################### Main Script ###################################


clip = Mpeg2Source( Source, idct = CPU_type)
clip = Trim( clip, Start, End)
clip = Crop( clip, Left, Top, -Right, -Bottom)
clip = Filter_Motion( clip, DimX, DimY, threshold_sm, threshold_hm)
clip = FluxSmooth( clip, 5, 3)
Return clip



#################################### Functions ####################################


# Motion_Hight : filtering function for fast (high-motion) scenes #

function Motion_Hight( clip Hight, float X, float Y)

{

Hight = Convolution3D( Hight, 0, 8, 12, 8, 12, 3, 0)
Hight = BicubicResize( Hight, X, Y, 0.33, 0.33)
Return Hight.subtitle("hight")

}



# Motion_Medium : filtering function for medium-motion scenes #

function Motion_Medium( clip Medium, float X, float Y)

{

Medium = Convolution3D( Medium, 0, 4, 6, 4, 6, 2.75, 0)
Medium = BicubicResize( Medium, X, Y, 0, 0.5)
Return Medium.subtitle("medium")

}



# Motion_Slow : filtering function for slow scenes #

function Motion_Slow( clip Slow, float X, float Y)

{

Slow = Convolution3D( Slow, 0, 2, 3, 2, 3, 2.5, 0)
Slow = BicubicResize( Slow, X, Y, 0, 0.7)
Return Slow.subtitle("slow")

}



# Detect_Motion : detects slow, medium and fast scenes #

function Detect_Motion( clip detect, clip Slow, clip Medium, clip Hight, float threshold_sm, float threshold_hm)

{

global Courant_fr = detect

output1 = Conditionalfilter( Courant_fr, Slow, Medium, "diff_Previous", "<", "threshold_sm", false)

output2 = Conditionalfilter( Courant_fr, output1, Medium, "diff_Next", "<", "threshold_sm", true)

output3 = Conditionalfilter( Courant_fr, Hight, output2, "diff_Previous", ">", "threshold_hm", false)

output4 = Conditionalfilter( Courant_fr, output3, output2, "diff_Next", ">", "threshold_hm", true)

output4 = frameevaluate( output4, "diff_Previous = YDifferenceFromPrevious( Courant_fr) + UDifferenceFromPrevious( Courant_fr) + VDifferenceFromPrevious( Courant_fr)")

output4 = frameevaluate( output4, "diff_Next = YDifferenceToNext( Courant_fr) + UDifferenceToNext( Courant_fr) + VDifferenceToNext( Courant_fr)")


return output4

}



# Filter_Motion : applies the filtering for slow, medium and fast scenes #

function Filter_Motion( clip filter, float X, float Y, float threshold_sm, float threshold_hm)

{

Slow = Motion_Slow( filter, X, Y)
Medium = Motion_Medium( filter, X, Y)
Hight = Motion_Hight( filter, X, Y)
output = Detect_Motion( filter, Slow, Medium, Hight, threshold_sm, threshold_hm)
return output

}

####################################################################################

Jellygoose 05-27-2003 09:06 AM

What happened to this one? Is anyone still working on it? It sounds so good that I'm thinking about trying it out...
How come nobody seems to care about this thread anymore?

ARnet_tenRA 05-27-2003 12:34 PM

Here you go, kwag. This is a new script that does linear filtering.

Code:

Version().ConvertToYV12().FadeIn(150).FadeOut(30)

LowMotionFilters=Blur(.2)
HighMotionFilters=Blur(.5)

ScriptClip("MergeLuma(LowMotionFilters,HighMotionFilters,(YDifferenceFromPrevious()+UDifferenceFromPrevious()+VDifferenceFromPrevious())/3)")

The Version()... line is just there to create some test video. Try putting very different values in the LowMotionFilters and HighMotionFilters variables and you can see what it's doing.
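For example (hypothetical values, purely to make the effect obvious), you can exaggerate the two filters and overlay the per-frame motion value that drives the merge:

Code:

LowMotionFilters=Blur(0.0)   # leave low-motion frames untouched
HighMotionFilters=Blur(1.0)  # blur high-motion frames heavily

ScriptClip("MergeLuma(LowMotionFilters,HighMotionFilters,(YDifferenceFromPrevious()+UDifferenceFromPrevious()+VDifferenceFromPrevious())/3)")
# Rough on-screen readout of the motion value (computed here on the merged clip):
ScriptClip("Subtitle(String((YDifferenceFromPrevious()+UDifferenceFromPrevious()+VDifferenceFromPrevious())/3))")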

Regards, Tenra

kwag 05-27-2003 02:32 PM

Hi Tenra,

Thanks for the script :)
After looking at AviSynth's reference manual and your script, I've come up with a single-line solution to the problem :!:

ScriptClip("val=YDifferenceFromPrevious()/14.55" + "val > MaxThreshold ? MergeLuma(blur(MaxThreshold)) : MergeLuma(blur( val ))")

So now a script will look something like this:


Code:

LoadPlugin("C:\Filters25\MPEG2Dec3.dll")
LoadPlugin("C:\Filters25\STMedianFilter.dll")
LoadPlugin("C:\Filters25\UnFilter.dll")

Mpeg2Source("K:\DVDbot\THE_BOURNE_IDENTITY\VIDEO_TS\bourne.d2v")

MaxThreshold=1.58 # Define the max value for your filter. 1.58 here, because that's the maximum value blur() accepts.

UnFilter(50, 50)
BicubicResize(528, 480, 0, 0.6, 8, 0, 704, 480)
STMedianFilter(8, 32, 0, 0 )
##TemporalSmoother(1, 1)
mergechroma(blur(1.50))
ScriptClip("val=YDifferenceFromPrevious()/14.55" + "val > MaxThreshold ? MergeLuma(blur(MaxThreshold)) : MergeLuma(blur( val ))")
LetterBox(16, 16, 16, 16)
Limiter()

The way it works is very simple. The "val" returned by YDifferenceFromPrevious() fluctuates roughly between 0 and 25, so we divide it by 14.55 to bring it into a range of around 0 to 1.58 (the maximum value blur() accepts). For example, a frame with YDifferenceFromPrevious() = 7.3 gets roughly MergeLuma(blur(0.5)). And that's it :D
Now we have a linear value applied to mergeluma, depending on activity from the previous frame. It works FLAWLESSLY :mrgreen:
Because we now have a dynamic range, we can use values from 0 to 1.58, and the file size difference is HUGE compared to using a static 0.2 value for the luma blur. I've set the function up so that it also keeps the value from going above the maximum allowed. (I just love C's "?" conditional operator :wink: )

-kwag

Jellygoose 05-27-2003 02:39 PM

That sounds GREAT!! I'll try the script out tomorrow... FANTASTIC!!
How would you limit the max value to, let's say, 0.8? I wouldn't go any further than that on the luma blur... :wink:

So this is only for AviSynth 2.51, right? :?

By the way, TemporalSmoother is not available for AviSynth 2.51... So we gotta find something different there!

kwag 05-27-2003 02:46 PM

Quote:

Originally Posted by Jellygoose
That sounds GREAT!! I'll try the script out tomorrow... FANTASTIC!!
How would you limit the max value to, let's say, 0.8? I wouldn't go any further than that on the luma blur... :wink:

Yes, you do want to go further :!: :), because it will blur only on high, fast motion, where you can't see the details anyway :mrgreen:
So we can take advantage of this. If you still want to limit it, just change the line "MaxThreshold=1.58" to "MaxThreshold=0.8" :wink:
Quote:


So this is only for AviSynth 2.51 right? :?
Yep :!:
Quote:


By the way, TemporalSmoother is not available for AviSynth 2.51... So we gotta find something different there!
Working on it :)

-kwag

kwag 05-27-2003 02:56 PM

If you want to see the value being applied dynamically to MergeLuma, add the second line shown below:

ScriptClip("val=YDifferenceFromPrevious()/14.55" + "val > MaxThreshold ? MergeLuma(blur(MaxThreshold)) : MergeLuma(blur( val ))")
ScriptClip("Subtitle(String(val))")

Of course, the maximum value actually applied will be 1.58, even if the subtitle shows spikes above that value!
Now open your .avs in VirtualDub and have some fun :)

-kwag

ovg64 05-27-2003 03:01 PM

Is file prediction possible with this new script :?: :?

kwag 05-27-2003 03:07 PM

Quote:

Originally Posted by ovg64
Is file prediction possible with this new script :?: :?

Yes, but not with ToK :cry:

Add these lines to the bottom of your script:

interval = round((FrameCount/24)/60) # Interval spacing
nFrames = 24 # Frames per sample
SelectRangeEvery( (round(framecount/interval)),nFrames)
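For example (purely illustrative numbers), a two-hour NTSC FILM movie is about 172,800 frames: interval = round((172800/24)/60) = 120, and SelectRangeEvery(round(172800/120), 24) = SelectRangeEvery(1440, 24), i.e. 24 consecutive frames out of every 1440, or roughly one second sampled per minute of the movie.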


That's for the full sampler, and for NTSC FILM. PAL people, change the (24) to (25).
If encoding a 29.97fps NTSC movie, change it to (30)
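For instance, the PAL version of those lines would presumably read (assuming both 24s become 25, i.e. still one second per sample):

Code:

interval = round((FrameCount/25)/60) # Interval spacing
nFrames = 25 # Frames per sample
SelectRangeEvery( (round(framecount/interval)),nFrames)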

-kwag

ovg64 05-27-2003 03:13 PM

Then I guess it's back to work for Hedix & Muaddib. :D

kwag 05-27-2003 03:48 PM

Here are the fixed prediction lines for the 10% and 100% samples:

Code:

interval = round((FrameCount/24)/60) # Full sample.
##interval = round( ((FrameCount/24) / 60) / 10 ) # 10% of sample.
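# Use one interval line or the other: the full sample grabs ~24 frames per minute
# of footage, while the 10% version takes a tenth as many sample points.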
nFrames = 24 # Frames per sample
SelectRangeEvery( (round(framecount/interval)),nFrames)

-kwag

ovg64 05-27-2003 04:23 PM

Quote:

Originally Posted by Jellygoose
By the way, TemporalSmoother is not available for AviSynth 2.51... So we gotta find something different there!

How about atc by Marc FD: http://ziquash.chez.tiscali.fr/ :?: :idea:

kwag 05-27-2003 04:29 PM

Here's a small clip, showing (as a numeric subtitle) the value being applied dynamically to each frame by the function :wink:
www.kvcd.net/dynamic-adapt.mpg

-kwag

