The problem was probably due to the recording not having many 'action' scenes...
Well, this was the MA-based script I used at the time.. sorry for posting it late, but I missed your reply! I did have lots of comments in it, but I removed them for clarity. I also use a few other filters such as UnDot() and DCTFilter().
Yes, I usually don't use ReadAVS() - I actually found ReadAVS() to be slower than DirectShow with static encodes (I haven't tried the MA script or this one with ReadAVS(), though, so perhaps I should try that...).
## Functions ##
# Returns the smaller of two integers
function fmin(int f1, int f2) {
  return (f1 < f2) ? f1 : f2
}
video=mpeg2source("source.d2v")
video=FieldDeinterlace(video)
video=undot(video)
# left,top,-right,-bottom (or width,height)
video=Crop(video,0,4,-4,-0)
#KVCDx3 resize:
video=BicubicResize(video,512,512).undot()
video=STMedianFilter(video,3,3,1,1)
video=MergeChroma(video,blur(video,1.5))
video=MergeLuma(video,blur(video,0.1))
#with the motion adaptive script, use Motion Estimation instead of High Quality
## Linear Motion Adaptive Filtering ##
#
# (Portions from AviSynth's manual.)
# This applies a variable TemporalSoften and a variable blur (via UnFilter).
# Both filters are active at all times: the temporal softening scales
# inversely with the activity, while the spatial softening scales with it.
# Activity is measured from the current frame to the next.
video=ScriptClip(video,"nf=YDifferenceToNext()"+chr(13)+"UnFilter(-(fmin(round(nf)*2,100)),-(fmin(round(nf)*2,100))).TemporalSoften(fmin(round(2/nf),6),round(1/nf),round(3/nf),1,1)")
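To make the per-frame mapping in the ScriptClip line easier to follow, here is a small illustrative sketch (in Python, just for the arithmetic - `motion_adaptive_params` is a hypothetical helper, not an AviSynth function). It mirrors how `nf` (from YDifferenceToNext(), the average absolute luma delta to the next frame) is turned into the UnFilter and TemporalSoften settings:

```python
def motion_adaptive_params(nf):
    """Illustrative mapping from frame difference nf to the filter
    settings used in the ScriptClip line above (assumes nf > 0;
    the original script has no guard against nf == 0)."""
    nf = max(nf, 0.1)                   # guard against division by zero
    soften = -min(round(nf) * 2, 100)   # UnFilter strength: more motion -> stronger softening
    radius = min(round(2 / nf), 6)      # TemporalSoften radius: more motion -> fewer frames averaged
    luma_thresh = round(1 / nf)
    chroma_thresh = round(3 / nf)
    return soften, radius, luma_thresh, chroma_thresh

# low-motion frame: small spatial blur, wide temporal averaging
print(motion_adaptive_params(1.0))   # -> (-2, 2, 1, 3)
# high-motion frame: strong spatial softening, almost no temporal averaging
print(motion_adaptive_params(30.0))  # -> (-60, 0, 0, 0)
```

Note that Python's round() uses banker's rounding, so results at exact .5 boundaries may differ slightly from AviSynth's.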
video=DCTFilter(video,1,1,1,1,1,1,.5,0)
video=Undot(video).AddBorders(16,32,16,32) # KVCDx3
video=Limiter(video)
video=YV12toRGB24(video,interlaced=false)
video=FlipVertical(video) # the YV12->RGB24 conversion natively flips the image
video