digitalFAQ.com Forums [Archives] (http://www.digitalfaq.com/archives/)
-   Video Encoding and Conversion (http://www.digitalfaq.com/archives/encode/)
-   -   To crop or not to crop! (http://www.digitalfaq.com/archives/encode/1841-crop-crop.html)

rendalunit 12-19-2002 08:38 PM

I'm getting ready to encode Pearl Harbor to one CD with the LBR template and new GOP. I encoded a test sample with no filters and the Gibbs effect was very bad -- so now I'm trying out SansGrip's filters for the first time! :D

I used this script:
Code:

LoadPlugin("C:\encoding\MPEG2DEC2.dll")
LoadPlugin("C:\encoding\fluxsmooth.dll")
LoadPlugin("C:\encoding\blockbuster.dll")
LoadPlugin("C:\encoding\legalclip.dll")
AviSource("D:\PEARL_HARBOR_DSC1\VIDEO_TS\TEST.avi")

Blockbuster( method="noise", detail_min=1, detail_max=50, variance=5, cache=1024 )
Blockbuster( method="sharpen", strength=100)
FluxSmooth()

LegalClip()

It helped reduce the Gibbs effect and the DCT blocks a lot! The variance and strength values are much more radical than what everyone else is using, though :?: Does anyone have a good script for reducing the Gibbs effect, or for cramming 3 hours onto one CD?

btw- excellent documentation with your filters :)

thanks.

SansGrip 12-19-2002 08:53 PM

Quote:

Originally Posted by rendalunit
so now I'm trying out SansGrips filters for the first time! :D

You're only just trying them?? I'm offended! :evil: hehe just kidding :).

Quote:

I used this script:
Those are very strong values for Blockbuster. One reason you're having to go so high might be because you're smoothing after adding noise -- generally not a good idea, since you're adding noise then taking it away again ;).

I generally use:

Code:

BlahSource("foo.bar")
#IL = Framecount / 100
#SL = round(Framerate)
#SelectRangeEvery(IL, SL)
Crop(...)
LegalClip()
BilinearResize(...)
FluxSmooth()
Blockbuster(method="noise")
AddBorders(...)
LegalClip()

I find Blockbuster's default parameters are usually adequate. However, since you're encoding such a long movie, it might not hurt to up the variance a little (though I personally wouldn't go over 1.5 or 2). You might not be able to see much of a difference, but the encoder will definitely notice it. That change might look something like this (the 1.5 is just an illustrative value, not a recommendation for every source):
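
Code:

Blockbuster(method="noise", variance=1.5)  # defaults for detail_min/detail_max, slightly stronger noise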

As for sharpening, I generally don't do it. In my experience it increases the file size without any real benefit. If I think the movie could do with a little sharpening, I'll usually switch to bicubic or Lanczos resizing, both of which give a sharper image than bilinear. Swapping the resizer is a one-line change (the resolution here is just a placeholder -- use whatever your template calls for):
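
Code:

BilinearResize(352, 240)    # softest image, compresses best
#BicubicResize(352, 240)    # sharper than bilinear
#LanczosResize(352, 240)    # sharpest of the three, at a small file size cost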

Quote:

btw- excellent documentation with your filters :)
Thanks! First time anyone's said that :). I try to be as clear as possible, but it's difficult to write effective docs when you know the thing inside and out... Very easy to forget something important.

kwag 12-19-2002 09:08 PM

Quote:

Originally Posted by SansGrip

Quote:

(3)---Use predictor factor as 0.89
As far as I know this is still up in the air. We'd appreciate it if you could test a few values for us, though -- say, ranging from 0.85 to 1.0... ;)

Up, up in the air, the beautiful balloons..... :D Yes, this is still not the final number. The ideal thing is to find constant prediction across all resolutions, and then apply the .95 as before. That will create less chaos 8O That's what I'm doing right now, and I probably won't go to sleep until I get it right :wink:

Quote:

then we will be able to fit 90 mins into ONE CD-R????
Quote:

Yesterday I put American Pie (1h35m) on one CD at 704x480. Looks awesome.
Red Planet (106 minutes) on one CD at 704x480. I still can't believe what I see 8)

-kwag

black prince 12-19-2002 09:52 PM

@SansGrip and Kwag,

It would be nice to resolve the question of AviSynth vs. TMPGEnc resizing
affecting file size. Encoding with TMPGEnc's resize is sloooooow!! :cry:
I also wonder whether TMPGEnc's border mask couldn't be done from the
AVS script instead. If Crop is cutting the borders at the top and left, then
the leakage has to be at the bottom and right, which TMPGEnc's mask covers
up for file size savings. :?: Is there a filter to mask borders the same way TMPGEnc does :?:

-black prince

SansGrip 12-19-2002 10:12 PM

Quote:

Originally Posted by black prince
Is there a filter to mask borders the same as Tmpgenc :?:

Yep. Use AddBorders if you want to, well, add borders, or Letterbox if you want to do the equivalent of TMPGEnc's "mask" feature.
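
A quick sketch of the difference (the 60-line bars are just placeholders):

Code:

AddBorders(0, 60, 0, 60)  # grows the frame: adds new black bars, 60 lines top and bottom
#Letterbox(60, 60)        # keeps the frame size: blanks the existing top/bottom 60 lines

AddBorders changes the frame size and Letterbox doesn't, so pick whichever matches your target resolution.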

kwag 12-19-2002 10:45 PM

Update flash :wink:

Just finished my 352x240 LBR MIN=300Kbps, MAX=1,800Kbps test with "A Bug's Life" for the second time today. Instead of using 100 samples at 24 frames each, I used 50 samples of 48 frames each. The file size prediction came out as follows: wanted file size 725,112KB, final file size 727,262KB -- an error of only about 0.3% :mrgreen:
That was with the prediction factor at 1.0, so for safety I'll now set it to 0.98, and that should be good enough. The previous encode, with 100 samples of 24 frames each, came out to ~800MB 8O
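(For anyone following along: the prediction is, roughly speaking, just scaling the sample encode up to the whole movie -- predicted size ~= sample size x (total frames / sampled frames) x prediction factor. Setting the factor to 0.98 leaves a ~2% safety margin.)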
Now I'm encoding "Red Planet" at 704x480 with the same formula adjustments and Blockbuster noise filter. Four hours to go..... :wink:

-kwag

kwag 12-19-2002 10:48 PM

@SansGrip,

I believe this would mean a value of 2 instead of 5 in KVCD Predictor :?:

-kwag

SansGrip 12-19-2002 11:33 PM

Quote:

Originally Posted by kwag
I believe this would mean a value of 2 instead of 5 in KVCD Predictor :?:

Yep. I'm sure you already know this, but for others reading: if you use 50 samples of 48 frames with a script like this:

Code:

IL = Framecount / 50        # interval between sample points
SL = round(Framerate) * 2   # two seconds' worth of frames per sample
SelectRangeEvery(IL, SL)    # keep SL frames every IL frames

then you can leave the KVCDP "Sample Points" setting at 100. While we're really using 50 sample points, each one is twice as long as before, so the total is still 2,400 frames -- exactly the same as 100 one-second samples at 24fps. Since there's (yet) no way to enter the sample length, pretending we're using 100 1-second sample points still gives an accurate prediction.

@kwag

I'll try 50/48 in a moment, but I'd be interested to hear whether you get even more accurate results with 100 samples of 48 frames (Sample Points set to 200, if you're using KVCDP), or whether 50 is indeed enough. Care to run another encode? :mrgreen:
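
For reference, that variant is just the same trick with more points:

Code:

IL = Framecount / 100       # 100 sample points instead of 50
SL = round(Framerate) * 2   # still two seconds each
SelectRangeEvery(IL, SL)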

SansGrip 12-19-2002 11:38 PM

Quote:

Originally Posted by kwag
The previous encode with 100 samples of 24 frames each, came out to ~800MB 8O

It makes sense to me that using a longer sample gives better accuracy. I think what we can take from this is that the sample length should be double the GOP length...

While the number of sample points is of course important, all it's really doing is giving us more representative material to work with than simply taking 2400 frames from the beginning of the movie. I think with almost all movies it should be adequate to sample 50 times.
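
If the double-the-GOP rule holds up, the snippet generalizes to something like this (GOP_LEN is a hypothetical name, and the 24 assumes the new GOP length, since the 48-frame samples are twice that):

Code:

GOP_LEN = 24                # assumed GOP length -- change to match your template
IL = Framecount / 50        # 50 sample points spread across the movie
SL = GOP_LEN * 2            # each sample is two full GOPs long
SelectRangeEvery(IL, SL)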

black prince 12-19-2002 11:43 PM

Hey Kwag,

I hope you're using the latest Blockbuster 0.6 with the seed param:

LoadPlugin("E:\DVD Backup\2 - DVD2SVCD\MPEG2DEC\MPEG2DEC.dll")
LoadPlugin("E:\DVD Backup\2 - DVD2SVCD\BlockBuster\BlockBuster.dll")
LoadPlugin("E:\DVD Backup\2 - DVD2SVCD\LegalClip\LegalClip.dll")
mpeg2source("D:\Temp\movie.d2v")
LegalClip()
LanczosResize(496,448)
Blockbuster( method="noise", detail_min=1, detail_max=10, variance=1, seed=1 )
AddBorders(16,16,16,16)
LegalClip()


With a seed, the same noise pattern is generated on every run, so the file
size won't vary between encodes when using noise. 8) Just encoded Minority
Report using 528x480, CQ_VBR=11, audio 128kb, on 2 CDs (800MB each):
1.4GB total. The quality is the best I've seen. Even the Gibbs effect has
almost vanished. Fast action scenes were near perfect. This is by far the
best picture quality yet. The DVD and the backup look the same. Since I
could have used a higher CQ_VBR to fill up the CDs, I'm going to wait
until the file prediction formula is finalized :D

-black prince

SansGrip 12-19-2002 11:54 PM

Quote:

Originally Posted by black prince
The quality is the best I've seen. Even Gibbs effect had almost vanished. Fast action scenes were near perfect. This is by far the best picture quality yet. The DVD and the backup look the same.

Sounds awesome :). I was actually planning on renting that movie tomorrow with a view to buying it if it's as good as people say...

Right now I'm seeing if I can get Untouchables (1h59m) on one disc at 528x480. Maybe I'm dreaming, or maybe not..... :D

kwag 12-20-2002 12:08 AM

Quote:

Originally Posted by SansGrip
Quote:

Originally Posted by kwag
The previous encode with 100 samples of 24 frames each, came out to ~800MB 8O

It makes sense to me that using a longer sample gives better accuracy. I think what we can take from this is that the sample length should be double the GOP length...

Exactly :D I'm trying to get by with this, because 100 samples of 48 frames is 200 seconds 8O -- just a little too long for two or three sample encodes. If the 704x480 encode I'm doing right now hits the predicted size, I think the above should be enough. As you said, two complete GOPs per snapshot should give a pretty good prediction. Maybe 60 or 70 snapshots of 48 will do as well as 100 :idea: So maybe a compromise: more than 50, but not quite as high as 100, for fine tuning :idea:

-kwag

kwag 12-20-2002 12:23 AM

Quote:

Originally Posted by black prince
Hey Kwag,

I hope your using the latest Blockbuster 0.6 with seed parm:

Blockbuster( method="noise", detail_min=1, detail_max=10, variance=1, seed=1 )

I'm using: Blockbuster( method="noise", detail_min=1, detail_max=10, variance=.5, seed=1 ) :wink:

-kwag

SansGrip 12-20-2002 12:27 AM

Quote:

Originally Posted by kwag
If the 704x480 encode I'm doing right now targets the predicted size, I think the above should be enough.

I'm encoding Untouchables (1h59m) right now: 528x480, new GOP, CQ_VBR 9.25, error margin 0%, sample points 50, sample size 48. Audio: 128kb/s, 109mb. Predicted size: 803mb. Time remaining: 3h19m. I'll let you know in the morning ;).

9.25 (by the way, I added a decimal place in the encoding helper ;)) is a touch on the low side, but the samples looked pretty good to me. If the Gibbs is too noticeable on the TV, I'll re-encode at 352x480 with the new GOP and formula.

I'm beginning to think that with the various filters you've crafted there are very few movies we can't get on one disc... 8O

Quote:

Maybe 60 or 70 snapshots of 48 will do as good as 100 :idea: So maybe a compromise of more than 50 but not quite as high as 100 for fine tunning :idea:
Yep, possibly. I'm still hoping that 50/48 is going to be the answer for this new GOP, since I want the whole prediction process to go as fast as possible. Who knows, maybe even less than 50 would give good results with the larger sample length... We won't know until we test :).

SansGrip 12-20-2002 12:28 AM

Quote:

Originally Posted by kwag
I'm using: Blockbuster( method="noise", detail_min=1, detail_max=10, variance=.5, seed=1 ) :wink:

Just so you know, because detail_min=1 and detail_max=10 are the defaults, you could simply use: Blockbuster(method="noise", variance=.5, seed=1)

Slightly less typing ;).

Edit: Heck, you could even use Blockbuster(method="noise", variance=.5, seed=5823) :mrgreen:

kwag 12-20-2002 12:43 AM

Quote:

Originally Posted by SansGrip
Quote:

Originally Posted by kwag
I'm using: Blockbuster( method="noise", detail_min=1, detail_max=10, variance=.5, seed=1 ) :wink:

Just so you know, because detail_min=1 and detail_max=10 are the defaults, you could simply use: Blockbuster(method="noise", variance=.5, seed=1)

Slightly less typing ;).

Edit: Heck, you could even use Blockbuster(method="noise", variance=.5, seed=5823) :mrgreen:

Or maybe: Blockbuster(method="noise", variance=.5, seed=atoi("SansGrip")) :mrgreen:

-kwag

muaddib 12-20-2002 12:44 AM

Quote:

Originally Posted by SansGrip
Funnily enough I just got by far the most accurate results yet by taking 100 snapshots of 48 frames, and fooling KVCDP into working with it by setting the "Sample Points" to 200. We're doing that weird "wavelength" thing again ;).

Predicted size was 812mb with an error margin of 0%, final size is 808mb 8O.

It does take longer to encode the sample strips, but it's surely worth it if it makes the prediction more accurate...

Hi SansGrip!

I've been doing that (100 snapshots of 2 seconds) for some time now.
My DVD player doesn't like MPEG-1 VBR at resolutions above 352x240, so if I want anything else I have to use MPEG-2. And, I don't know why, but I couldn't get accurate predictions with the default script (and MPEG-2), so I tried exactly what you said and it gave me good accuracy.

I never tried 50 snapshots... but maybe that would work well too.

SansGrip 12-20-2002 12:48 AM

Quote:

Originally Posted by kwag
Or maybe: Blockbuster(method="noise", variance=.5, seed=atoi("SansGrip")) :mrgreen:

heheheh

Let's see... "SansGrip" is 53616E7347726970 in hex, which in decimal is 6,008,204,819,287,927,152. Nope, wouldn't fit ;).

SansGrip 12-20-2002 12:50 AM

Quote:

Originally Posted by muaddib
And, I don't know why, but I couldn't get accurate predictions with the default script (and MPEG-2), so I tried exactly what you said and it gave me good accuracy.

Pretty similar to my experience. Try 50 samples of 2 seconds each. Heck, try fewer than 50. The more people we have testing this the quicker we'll work out the optimal formula :D.

kwag 12-20-2002 12:53 AM

Quote:

Originally Posted by SansGrip
Quote:

Originally Posted by kwag
Or maybe: Blockbuster(method="noise", variance=.5, seed=atoi("SansGrip")) :mrgreen:

heheheh

Let's see... "SansGrip" is 53616E7347726970 in hex, which in decimal is 6,008,204,819,287,927,152. Nope, wouldn't fit ;).

Ah! Bummer! :mrgreen: Don't you just love the users who leave a key stuck down on a 4-character field with no boundary checks :mrgreen: HAHAHA. Leak, leak. Then when you press "ENTER" -- boom! :mrgreen:

-kwag

