Using AviSynth 2.5x.
|
If you captured your source at 720x480, it doesn't seem that "ITU-R BT.601-4" should be checked in MovieStacker.
|
The ITU thingie is something you must check for yourself. Some cards do horizontal scaling, like my Hauppauge WinTV Theatre does, so it doesn't matter whether I use a horizontal resolution of 704, 720 or 768: the result is always scaled and the capture itself doesn't have any overscan borders.
If your capture is already in YV12 (all MPEGs, for example), there's naturally no need to do the conversion. Nick, that AddBorders line looks really odd. You're adding 72 pixels to the left and right and 17 pixels to the top and bottom? (EDIT: Never mind... you're going anamorphic, but why? Your source is not anamorphic) |
And your script might look like this:
Mpeg2Source("H:\fut16\fut16.d2v")
BicubicResize(448,224) # EDIT
AddBorders(16,16,16,16)
Don't crop an odd number of pixels vertically if the source is interlaced! See the post earlier in this thread for correct cropping parameters. |
Boulder,
The decomb500 tutorial gives Telecide() as:
Telecide(order=1,guide=1,post=4,vthresh=24)
MovieStacker gives Telecide() as:
Telecide(guide=0, gthresh=30, post=true, threshold=15, dthreshold=9, blend=true, show=false, agg=false, reverse=false, firstlast=false, chroma=false, nt=0, mm=0)
I now understand how Telecide() works using decomb500. I need to know how to use this information in the regular Telecide() function. |
Hi Boulder. When the docs say:
"height mod-4 if video is interlaced"
I gather that means a multiple of 4? Excuse my ignorance. Regards ozjeff99 |
Quote:
|
I like to keep my films interlaced for making MPEG-2 and SKVCD.
I've been using the "interlaced script" that you proposed in this thread, and I'm happy with the results. I read in "another" forum that for spatial and temporal filters it was recommended to do it this way: Code:
BicubicResize(_parameters_)
What's your opinion about this suggestion? What are they doing? I know very little about filtering, but I think they separate the frames into two fields at full resolution, apply the filters, and then separate fields and select two to weave into a frame. Is that right? Do you think it's a better way to keep the source interlaced and apply filters, or is it just good for some special situations? I'm very interested in your feedback 8O . |
The Bob() doubles the frame rate, so you don't need to worry about temporal and spatial-temporal filters.
SeparateFields().SelectEvery(4,1,2).Weave() throws away the extra frames so you're back at the original frame rate, and then makes it interlaced again. This method gives good quality (maybe marginally better?), but it will take (much) longer. |
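The workflow described above could be sketched like this (the source file and the smoothing filter are placeholders, not from the thread, and the SelectEvery indices depend on your field order): Code:
AVISource("capture.avi")   # placeholder source
Bob()                      # each field becomes a full frame; frame rate doubles
TemporalSoften(2,3,3)      # example temporal filter, safe on the bobbed clip
SeparateFields()
SelectEvery(4,1,2)         # keep one pair of fields per original frame
Weave()                    # back to the original interlaced frame rate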
This is also a very good way to process interlaced material, keeping it interlaced. You'll need the ViewFields and UnViewFields plugins by Simon Walters. This is the script I used with "The Trouble With Harry".
Code:
MPEG2Source("c:\temp\dvd-rip\the trouble with harry\harry.d2v",idct=5)
The good thing is that no filter needs to be called twice. Some filters just don't like that and may produce strange results. I haven't done any speed tests, but I suspect that this is faster than SelectEven+SelectOdd+Interleave. It is definitely a lot faster than using Bob. |
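The ViewFields/UnViewFields pattern might look something like the sketch below. The plugin path, the filter choice, and the placement of the conversion line are assumptions; the poster's actual filter chain is not shown in this thread. Code:
LoadPlugin("C:\Filters25\ViewFields.dll")   # path is an assumption
MPEG2Source("c:\temp\dvd-rip\the trouble with harry\harry.d2v",idct=5)
ViewFields()                      # rearrange the fields so filters can run once
Convolution3D(preset="movieLQ")   # placeholder filter, called only once
UnViewFields()                    # restore the interlaced field layout
ConvertToYUY2(interlaced=true)    # CCE wants YUY2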
Quote:
|
Nic, ....
as you can see above, the script handles a DVD rip (.d2v), and in this CASE that means YV12! So :arrow: he also uses a YV12 source ... like you do when handling your MPEG-2 captures ... :idea: Here's the MPEG2Dec3 "readme" doc, have a look: http://home.earthlink.net/~teluial/a...MPEG2Dec3.html |
Quote:
My captures are top field first. Does ViewFields assume a field order, or is it unimportant?
MPEG2Source("c:\temp\dvd-rip\the trouble with harry\harry.d2v",idct=5)
What is "idct=5"? Also, I assume that since I use TMPGEnc, I don't need the ConvertToYUY2 line. |
The field order shouldn't matter. Just try with only the MPEG2Source and ViewFields lines in your script and you'll see what ViewFields does.
IDCT=5 means that MPEG2DEC3.dll uses SSE2 instructions to decode the video (I have a P4). I don't know if it's any faster than the default IDCT=2, though :wink: I would definitely do the conversion in AviSynth when dealing with interlaced sources; I don't trust any external codec to do the job for me. With a YUY2 source, conversion by a codec should work if you have HuffYUV doing the job, but since HuffYUV doesn't support YV12, I really don't know which codec would do the YV12->RGB24 conversion. Maybe you should try ConvertToRGB24(interlaced=true) instead of the ConvertToYUY2(interlaced=true) line in my script. |
ConvertToYUY2(interlaced=true) # CCE wants YUY2
Pardon my confusion, but I capture in YV12 and don't use CCE. Do I still need to convert? I have used ConvertToYUY2(interlaced=true) in the past, when I was capturing to HuffYUV AVI. |
As I said in my answer, you can use ConvertToRGB24(interlaced=true) instead of ConvertToYUY2(interlaced=true) if you use TMPGEnc :wink:
|
Should this interlaced MA script use 'motion estimate search (fast)' in TMPGEnc, like the MA script?
Also, when the AVS file is opened in TMPGEnc, do you need to choose 'interlaced' as the source type? (With deinterlaced or progressive sources you usually have to change this to non-interlaced, or whatever the option is called.) |
Do a sample encoding of maybe 5 minutes that includes a lot of variation in the frames, and decide on your own; your eyes won't lie :wink:
|
This seems to work for me; it is also 30% faster than encoding without separating fields.
Code:
### for interlaced source only
LoadPlugin("C:\Filters25\undot.dll")
LoadPlugin("C:\Filters25\STMedianFilter.dll")
LoadPlugin("C:\Filters25\Convolution3DYV12.dll")
AVISource("C:\Documents and Settings\user\Desktop\DVtype2.avi")
ConvertToYV12()
Levels(0,0.94,255,0,255)
SeparateFields()
odd=SelectOdd().Kwag_MA1()
even=SelectEven().Kwag_MA2()
Interleave(even,odd)
Weave()

Function Kwag_MA1 (clip input1) {
  undot(input1)
  Limiter(input1)
  STMedianFilter(input1,3, 3, 1, 1 )
  MergeChroma(input1,blur(1.5))
  MergeLuma(input1,blur(0.1))
  ScriptClip(" nf = YDifferenceToNext()" +chr(13)+ \
  "unfilter( -(fmin(round(nf)*2, 100)), -(fmin(round(nf)*2, \
  100)) ).TemporalSoften( fmin( round(2/nf), 6), \
  round(1/nf) , round(3/nf) , 1, 1) ")
  Convolution3D(input1,preset="movieLQ")
  Limiter(input1)
}

Function Kwag_MA2 (clip input2) {
  undot(input2)
  Limiter(input2)
  STMedianFilter(input2,3, 3, 1, 1 )
  MergeChroma(input2,blur(1.5))
  MergeLuma(input2,blur(0.1))
  ScriptClip(" nf = YDifferenceToNext()" +chr(13)+ \
  "unfilter( -(fmin(round(nf)*2, 100)), -(fmin(round(nf)*2, \
  100)) ).TemporalSoften( fmin( round(2/nf), 6), \
  round(1/nf) , round(3/nf) , 1, 1) ")
  Convolution3D(input2,preset="movieLQ")
  Limiter(input2)
}

Function fmin( int f1, int f2) {return ( f1<f2 ) ? f1 : f2}
|
FYI, you can use only one function, as the content is the same. The actual value of the clip parameter will be bound to "even" or "odd" according to which line calls the function.
Note: you added a C3D("movieLQ") at the end of the normal MA script? And you use that on DivX??? Wow... you really like blurred pictures :-)
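Following that remark, the two identical functions could be collapsed into one, as a sketch (assuming the script above stays otherwise unchanged; the filter chain inside is abbreviated): Code:
SeparateFields()
odd=SelectOdd().Kwag_MA()
even=SelectEven().Kwag_MA()
Interleave(even,odd)
Weave()

Function Kwag_MA (clip input) {
  undot(input)
  Limiter(input)
  STMedianFilter(input,3, 3, 1, 1 )
  MergeChroma(input,blur(1.5))
  MergeLuma(input,blur(0.1))
  # ... same ScriptClip / Convolution3D / Limiter chain as in the script above ...
}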
Site design, images and content © 2002-2024 The Digital FAQ, www.digitalFAQ.com
Forum Software by vBulletin · Copyright © 2024 Jelsoft Enterprises Ltd.