KVCD Predictor - Page 7 - digitalFAQ.com Forums [Archives]
  #121  
11-24-2002, 04:21 PM
kwag kwag is offline
Free Member
 
Join Date: Apr 2002
Location: Puerto Rico, USA
Posts: 13,537
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by SansGrip
Quote:
Originally Posted by kwag
It's a BIG bug on TMPEG.
Ouch!

So... Should sample strips be encoded with a system stream or without?

A thought that struck me last night:

If KVCDP only accepted .m1v and .m2v files it would eliminate the possibility that someone might feed it a file with audio too. But I still think we need to take the system stream into account, as it adds ~1.5% to the final result.

So, what if we encode the sample strips as an ES, but I figure the system stream into the final calculations?
Then we can just use 3% as the error margin for an ES stream, and 5% for a system (video) stream in KVCDP.

-kwag
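A minimal Python sketch of the arithmetic proposed above (not KVCDP's actual code; the function and the example numbers are illustrative, and only the ~1.5% mux overhead and the 3% margin come from the post):

Code:
# Minimal sketch of the arithmetic discussed above -- not KVCDP's actual code.
# Assumptions: the sample strip is an elementary stream (video only), muxing to a
# system stream adds ~1.5%, and a 3% error margin is kept against the target size.

def fits_target(sample_bytes, sample_seconds, movie_seconds,
                target_bytes, mux_overhead=0.015, margin=0.03):
    """Extrapolate an ES sample strip and check it against the target with headroom."""
    video_estimate = sample_bytes * (movie_seconds / sample_seconds)  # scale up ES sample
    full_estimate = video_estimate * (1.0 + mux_overhead)             # add system-stream overhead
    return full_estimate, full_estimate <= target_bytes * (1.0 - margin)

# Example: a 100 s sample strip of 6,000,000 bytes, a 2-hour movie, one 800 MB CD-R.
estimate, ok = fits_target(6_000_000, 100, 7_200, 800_000_000)
print(f"estimated system stream: {estimate / 1e6:.0f} MB, fits with margin: {ok}")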
Reply With Quote
  #122  
11-24-2002, 04:26 PM
kwag kwag is offline
Free Member
 
Join Date: Apr 2002
Location: Puerto Rico, USA
Posts: 13,537
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by black prince
Hi Kwag and SansGrip,

Kwag wrote:
Quote:
It's a BIG bug on TMPEG. I tried my older version, 2.57 PLUS, and it works correctly using both Video only and System (Video only). So TMPEG 2.59 is broken on System (Video only).
-kwag
I use TMPGEnc Plus 2.59. Does this mean I should go back to 2.57 or
stick with what I have? I'm confused as to what to use. I'm getting
accurate file size predictions with the manual process. What to do,
what to do...

-black prince
Just make your samples as ES (Video only). That's the way I've always done my test strips. You can use TMPEG 2.59, and set the error margin to 3%. That should still produce a final file size just a tad lower than the final predicted size (for insurance).

-kwag
Reply With Quote
  #123  
11-24-2002, 06:17 PM
Spyglass Spyglass is offline
Free Member
 
Join Date: Apr 2002
Location: Canada
Posts: 34
Thanks: 0
Thanked 0 Times in 0 Posts
I'm really excited about the predictor, but slight problem: I don't know what the hell to do... Maybe I'm the only one who hasn't a clue, but could someone post a step-by-step on what to do to use this? So far I've got this:

Use FitCD to get avs script (if you use 1.05 enable the prediction?)
Read the avs script into TMPGEnc (erm, then what? Make a few short samples?)
Make sure whatever sample you have is smaller than the answer from the prediction program?

Any layman's help would be greatly appreciated.
I know I'm slow and stuff; all I ask is that you be patient with us and don't forget us newbies when charging ahead with this awesome development...

spyglass.
Reply With Quote
  #124  
11-24-2002, 09:28 PM
muaddib muaddib is offline
Free Member
 
Join Date: Jun 2002
Location: São Paulo - Brasil
Posts: 879
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by kwag
Quote:
Originally Posted by SansGrip
Quote:
Originally Posted by kwag
It's a BIG bug on TMPEG.
Ouch!

So... Should sample strips be encoded with a system stream or without?

A thought that struck me last night:

If KVCDP only accepted .m1v and .m2v files it would eliminate the possibility that someone might feed it a file with audio too. But I still think we need to take the system stream into account, as it adds ~1.5% to the final result.

So, what if we encode the sample strips as an ES, but I figure the system stream into the final calculations?
Then we can just use 3% as the error margin for an ES stream, and 5% for a system (video) stream in KVCDP.

-kwag
Why do that, if we are getting accurate results with ES only and 5% as the error margin?
Reply With Quote
  #125  
11-24-2002, 09:34 PM
kwag kwag is offline
Free Member
 
Join Date: Apr 2002
Location: Puerto Rico, USA
Posts: 13,537
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by muaddib

Why do that, if we are getting accurate results with ES only and 5% as the error margin?
Well, I'm getting between -2% and -3% accuracy almost every time, so 3% should bring the prediction to almost 0% offset between predicted and actual file size. This should be verified by others too.

-kwag
Reply With Quote
  #126  
11-25-2002, 02:14 PM
andybno1 andybno1 is offline
Free Member
 
Join Date: Jul 2002
Location: Liverpool, UK
Posts: 832
Thanks: 0
Thanked 0 Times in 0 Posts
Will there be a non-.NET version?
Reply With Quote
  #127  
11-25-2002, 03:22 PM
SansGrip SansGrip is offline
Free Member
 
Join Date: Nov 2002
Location: Ontario, Canada
Posts: 1,135
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by andybno1
Will there be a non-.NET version?
Given that a couple of people have had problems with it, I've decided that once I've got this version working satisfactorily I will port it to regular Win32 code. .NET seems to work fine on 2K and XP, but it doesn't seem quite stable yet on older versions.
Reply With Quote
  #128  
11-25-2002, 03:59 PM
SansGrip SansGrip is offline
Free Member
 
Join Date: Nov 2002
Location: Ontario, Canada
Posts: 1,135
Thanks: 0
Thanked 0 Times in 0 Posts
@kwag:

I'm interested in finding out why the formula is always out by some percent. I think we can knock 1.5% off that because you encode the sample strips with no system stream, but where does the rest of the difference come from? Ideally one would want to get rid of the scale factor, since it's a bit of a kludge.

There are two culprits: the number of sample points taken, and the length of each sample.

As far as the number of samples goes, I would guess that the formula gets more accurate as you take more samples. However, one reaches a point of diminishing returns, in that it would take too long to encode each sample strip. Maybe 100 is too few?

Instead of each point being a second long, wouldn't it be better to use some multiple of the max frames per GOP, say 36 or 54? Of course, scene-change detection would mess this up. Does scene-change detection really make much of a quality difference since it almost certainly increases file size?

Perhaps if we took, say, 200 samples, each one composed of 2 or 3 whole GOPs, we'd be able to get a more representative sample.

Just some thoughts, probably misguided.
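Taking the figures in the post above at face value, a rough Python sketch of how much footage each sampling scheme would actually encode (the 36/54-frame lengths come from the post; the frame rates are the usual FILM/PAL/NTSC values, and everything else is illustrative):

Code:
# Back-of-the-envelope comparison of the sampling schemes floated above.
# The 36/54-frame figures come straight from the post; the rest is illustrative.
schemes = {
    "100 x 1 s (current)": lambda fps: 100 * round(fps),
    "200 x 36 frames":     lambda fps: 200 * 36,
    "200 x 54 frames":     lambda fps: 200 * 54,
}
for fps in (23.976, 25.0, 29.97):
    for name, total in schemes.items():
        frames = total(fps)
        print(f"{fps:>6} fps  {name:<20} {frames:>5} frames  (~{frames / fps:5.1f} s of encoding)")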
Reply With Quote
  #129  
11-25-2002, 09:19 PM
kwag kwag is offline
Free Member
 
Join Date: Apr 2002
Location: Puerto Rico, USA
Posts: 13,537
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by SansGrip
@kwag:

I'm interested in finding out why the formula is always out by some percent. I think we can knock 1.5% off that because you encode the sample strips with no system stream, but where does the rest of the difference come from? Ideally one would want to get rid of the scale factor, since it's a bit of a kludge.

There are two culprits: the number of sample points taken, and the length of each sample.

As far as the number of samples goes, I would guess that the formula gets more accurate as you take more samples. However, one reaches a point of diminishing returns, in that it would take too long to encode each sample strip. Maybe 100 is too few?

Instead of each point being a second long, wouldn't it be better to use some multiple of the max frames per GOP, say 36 or 54? Of course, scene-change detection would mess this up. Does scene-change detection really make much of a quality difference since it almost certainly increases file size?

Perhaps if we took, say, 200 samples, each one composed of 2 or 3 whole GOPs, we'd be able to get a more representative sample.

Just some thoughts, probably misguided.
Hi SansGrip,

The main reason I selected a one-second "window" snapshot was the size of the GOP. If you look at 24 NTSC FILM frames of an MPEG created with the KVCD templates, or even the standard VCD templates, you'll have two I frames and several B and P frames. So the worst-case scenario should always be a very high-action movie with many scene changes, where the GOP is constantly being replenished with new I frames. On an average movie, then, the actual file size will always come in below the predicted size because of the compression given by the B and P frames. That's our ~2% insurance. It's better to have -2% accuracy than to have it so tight that one day we might get a movie that goes over the predicted file size, and then we'd either have to re-encode the audio at a lower bit rate or overburn the CD-R. At least, that's the way I see it.
As for the 100 samples, I agree that more samples will give higher accuracy, because we're increasing the granularity of the formula. The more, the better. But then it also takes longer. I have found that with 100 samples, even on an extremely long movie like "The Green Mile", which is 3 hours long, the prediction still came out within ~2% of the final size. So for average 2-hour films, I think 100 is more than enough.
But hey, any improvements are always welcome.

-kwag
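A small illustrative Python sketch of the coverage this 100 x 1 s scheme gives (the running times are example values only):

Code:
# Illustrative coverage arithmetic for the 100 x 1 s sample strip described above.
for title, minutes in (("average film", 120), ("The Green Mile (approx.)", 180)):
    movie_seconds = minutes * 60
    sampled = 100 / movie_seconds     # fraction of the running time that gets encoded
    spacing = movie_seconds / 100     # seconds between the start of each 1 s window
    print(f"{title}: {sampled:.1%} of the film sampled, one window every ~{spacing:.0f} s")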
Reply With Quote
  #130  
11-26-2002, 05:17 AM
ozjeff99 ozjeff99 is offline
Free Member
 
Join Date: May 2002
Location: Sydney, Australia
Posts: 159
Thanks: 0
Thanked 0 Times in 0 Posts
Been noticing the moves to get accurate results. Has anyone picked up that the 100 x 1 sec sample, when viewed after TMPGEnc has finished encoding, shows as 2525 frames and 1:41 (101 sec)? If it has already been mentioned, sorry I missed it.
Reply With Quote
  #131  
11-26-2002, 05:49 AM
ozjeff99 ozjeff99 is offline
Free Member
 
Join Date: May 2002
Location: Sydney, Australia
Posts: 159
Thanks: 0
Thanked 0 Times in 0 Posts
I'm using PAL of course.
Reply With Quote
  #132  
11-26-2002, 10:15 AM
SansGrip SansGrip is offline
Free Member
 
Join Date: Nov 2002
Location: Ontario, Canada
Posts: 1,135
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by ozjeff99
Been noticing the moves to get accurate results. Has anyone picked up that the 100 x 1 sec sample, when viewed after TMPGEnc has finished encoding, shows as 2525 frames and 1:41 (101 sec)? If it has already been mentioned, sorry I missed it.
Yep. It's a little bit off, but the sample will always be the same size for each frame rate. With 29.97 fps sources the sample strip should be around 3,146 frames.
Reply With Quote
  #133  
11-26-2002, 07:55 PM
muaddib muaddib is offline
Free Member
 
Join Date: Jun 2002
Location: São Paulo - Brasil
Posts: 879
Thanks: 0
Thanked 0 Times in 0 Posts
Yep... I think that's because we have to round the framerate in order to use it as an argument of the SelectRangeEvery function.
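A tiny Python illustration of that rounding; the exact SelectRangeEvery parameters are an assumption here, but the arithmetic shows where the 2525-frame (1:41) PAL strip would come from:

Code:
# Illustrative arithmetic only -- assumes the strip is built with something like
# SelectRangeEvery(clip, every=total_frames // 100, length=round(fps)), so the
# partial block left at the end of the clip contributes a 101st range.
length_pal = round(25.0)       # one "window" = rounded framerate = 25 frames
windows = 100 + 1              # the leftover tail adds one more range
print(windows * length_pal, "frames")   # 2525 frames, i.e. 1:41 at 25 fps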
Reply With Quote
  #134  
11-26-2002, 07:58 PM
kwag kwag is offline
Free Member
 
Join Date: Apr 2002
Location: Puerto Rico, USA
Posts: 13,537
Thanks: 0
Thanked 0 Times in 0 Posts
The good ol' "Off-By-One" bug

-kwag
Reply With Quote
  #135  
11-27-2002, 04:35 AM
Jellygoose Jellygoose is offline
Free Member
 
Join Date: Jun 2002
Location: Germany
Posts: 1,288
Thanks: 0
Thanked 0 Times in 0 Posts
hmmm

I was just gonna ask if there will be a .NET-runtime-independent version of KVCD Predictor... I have Windows XP, but I'm not willing to download that 20.4 MB runtime library... would have been great though...
__________________
j3llyG0053
Reply With Quote
  #136  
11-29-2002, 12:51 PM
SansGrip SansGrip is offline
Free Member
 
Join Date: Nov 2002
Location: Ontario, Canada
Posts: 1,135
Thanks: 0
Thanked 0 Times in 0 Posts
So how are people finding it? Is it as accurate as doing it manually? Does the helper work okay?

I can't make it better if you don't tell me how.
Reply With Quote
  #137  
11-29-2002, 01:54 PM
kwag kwag is offline
Free Member
 
Join Date: Apr 2002
Location: Puerto Rico, USA
Posts: 13,537
Thanks: 0
Thanked 0 Times in 0 Posts
Hi SansGrip,

So far, so good
I've done a couple of movies, and the calculations are correct.

-kwag
Reply With Quote
  #138  
11-29-2002, 03:13 PM
Mario Mario is offline
Free Member
 
Join Date: Apr 2002
Location: Staffordshire, UK
Posts: 26
Thanks: 0
Thanked 0 Times in 0 Posts
Until now I have used a spreadsheet to do the prediction calculations. The predictor is 'spot on', thanks!
__________________
Frank
Reply With Quote
  #139  
11-29-2002, 04:06 PM
nicksteel nicksteel is offline
Free Member
 
Join Date: Nov 2002
Posts: 863
Thanks: 0
Thanked 0 Times in 0 Posts
deleted
Reply With Quote
  #140  
11-29-2002, 06:01 PM
black prince black prince is offline
Free Member
 
Join Date: Jul 2002
Posts: 1,224
Thanks: 0
Thanked 0 Times in 0 Posts
Hi SansGrip,

SansGrip wrote:
Quote:
So how are people finding it? Is it as accurate as doing it manually? Does the helper work okay?

I can't make it better if you don't tell me how .
_________________
Regards,
SansGrip
I can only speak for myself when I say that .NET and AviSynth 2.5 beta
problems pretty much stop me from testing KVCD Predictor. Also, having
discovered that BlockBuster noise causes the file size to change with the same
settings, I'm wondering how KVCD Predictor can be accurate, or is
this being compensated for? I'm sure it works for those who have
sorted out the setup to use it, but reinstalling my operating system just
to get rid of .NET (Add/Remove still doesn't work) and install it
again is something I just don't have time to do.

-black prince
Reply With Quote