TMPGEnc 2-Pass Engine: a Key to Faster CQ Prediction - Page 3 - digitalFAQ.com Forums [Archives]
  #41  
11-05-2003, 10:44 AM
Krassi Krassi is offline
Free Member
 
Join Date: Mar 2003
Location: Germany
Posts: 390
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by incredible
Tonight will be the night (I mean this evening *lol*)

Quote:
Will you stay tuned too? If yes, give me your ICQ via PM, so we can figure it out together
I would like to, but I can't. I'm going out today (cinema). My girlfriend wants to watch a love story I already promised her.
Quote:
BTW: Did you notice that Kwag made us moderators? I was really surprised
Yep, this morning. Nice surprise!
  #42  
11-05-2003, 11:04 AM
Krassi Krassi is offline
Free Member
 
Join Date: Mar 2003
Location: Germany
Posts: 390
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by Krassi
Prediction gave a CQ of 31 (filesize is 70.026 KB).
Doing a full encode now with 31.
Final size: 1.471.969 KB
Wanted VS: 1.317.910 KB
  #43  
11-05-2003, 11:14 AM
Krassi Krassi is offline
Free Member
 
Join Date: Mar 2003
Location: Germany
Posts: 390
Thanks: 0
Thanked 0 Times in 0 Posts
@incredible:
We could combine our theories:
Why not make a long VBR sample, then make two or more short predictions over different source ranges (as you described) within that long VBR range, and then take the average?
  #44  
11-05-2003, 11:35 AM
kwag kwag is offline
Free Member
 
Join Date: Apr 2002
Location: Puerto Rico, USA
Posts: 13,537
Thanks: 0
Thanked 0 Times in 0 Posts
Hi Krassi,

Have you tried a 2-pass encode on a very long sampler?
Maybe 5 minutes of slices, and then match the file size of that sampler with CQ?
Instead of doing a 2-pass of a continuous 5-minute area, the longer sliced sampler can "see" more of the footage, so the CQ encode should be closer to target.

-kwag
  #45  
11-05-2003, 12:53 PM
incredible incredible is offline
Free Member
 
Join Date: May 2003
Location: Germany
Posts: 3,189
Thanks: 0
Thanked 0 Times in 0 Posts
Send a message via ICQ to incredible
@ Krassi

Well, I hope I understood you correctly...

What I am going to do this evening is create 2 AVS scripts, each including for example Sampler(length=75, samples=100), which should give a good average of the movie's contents. The second script will include a Trim line so that the Sampler starts counting at a 5-minute offset. For the beginning a big sampler size, but ... I'll see what comes out!

After this I'll first look at the average! And that will be used within my next predictions, continuing only with script one (like TOK uses).

The only thing is that CCE works with the opposite of CQ values, and it seems to me that there is no stop at 100, which would mean worst quality.

Because I already did a test in the past:

I calculated the needed average bitrate with Calcumatic, for example 700 kbit. Then I wrote a script just including Sampler() and started the encoding in CCE using an assumed Q. During encoding you can see the average bitrate CCE is encoding with while it works through the sampler()-based stream, and almost at the end you can see the average bitrate CCE used - but in Mbit! So I had to multiply by 1024 to get the same result in kbit. Well, it took some turns, but finally it was very accurate.
The only annoying thing was that AviSynth sometimes crashed when starting the next turn after a new Q was set. I noticed this because, when encoding began, the speed suddenly rose to 4x and the result was a black frame, including a "blablabla... exception... blablabla" message. When I quit CCE and began anew, everything was fine.

So I thought about doing a prediction test the way described above, but also using the MainConcept Encoder, because in comparison to CCE it is possible to manipulate the GOP sequence better.


We'll see ....


PS: Another way!!

Posting EDITed! I figured the formula out (time now 8:46 pm)

I wrote a script which gives me a sliced, sampler()-based sample stream of almost 5% of the movie size, including the possibility to set an offset, which is determined by a variable called "off" (off = offset in seconds).
It also includes subtitles which show the actual calculations on screen during the workout. This sampler script can be used on every source, no matter if 23.976, 25.000 or 29.970 fps; it is smart and based on AviSynth's framecount/framerate detection functions.
The sample length is set to 3*source FPS, as also recommended when using TOK.
That means:
Sampler(samples = (total number of frames / 10) / (frame rate of the source * 3) / 2, length = (frame rate * 3))
Code:
################################################
######## Inc's 5% sampling movie script ########
################################################
mpeg2source("H:\AMERBEAUTY\ambeauty.d2v")
off=0 # offset in seconds
Trim(round(framerate()*off), framecount()) 
GripCrop(352,288)
GripSize(resizer="BilinearResize")
GripBorders()
Letterbox(0,0,8,8) 
############# strings just for the testing workout ############ 
Subtitle("Frames total movie/unsliced: "+String(framecount()),10,13)
Subtitle("samples : "+String(round(((framecount())/10)/(framerate()*3))/2),10,28)
Subtitle("sample length  : "+String(round(framerate()*3)),10,43)
Subtitle("= ca. "+String(100/10/2)+"% of Movie total",10,58)
Subtitle("offset  : "+String(off)+" sec.",10,73)
############### The smart sampler routine ############### 
sampler(samples=(round(((framecount())/10)/(framerate()*3))/2), Length=(round(framerate()*3)))
Now I'm going to do a 2-pass encoding using TMPGEnc and 808.5 kbit as calculated in Calcumatic .....

... to be continued

------------------------------ next step -----------------------------------

... 2-pass encoding took 00:03:55, the result is 33.2 MB on HD,
which means 33.2*2 = 66.4, *10 = would be 664.0 MB m1v final filesize!

... to be continued

---------------------------------- next step ---------------------------------

... just assuming CQ 80 to encode with ... took 00:01:59, the result is 30.5 MB, which means 30.5*2 = 61.0, *10 = would be 610.0 MB m1v final filesize

... to be continued


---------------------------------- next step ---------------------------------

Now we apply the offset!
To get many more different samples on average during our workout, we apply an offset of about 30 sec on the second 2-pass step. That means we give the variable "off" a value of 30.

Code:
mpeg2source("H:\AMERBEAUTY\ambeauty.d2v")
off=30 # offset in seconds
Trim(round(framerate()*off), framecount())
GripCrop(352,288)
GripSize(resizer="BilinearResize")
GripBorders()
Letterbox(0,0,8,8) 
############# strings just for the testing workout ############ 
Subtitle("Frames total movie/unsliced : "+String(framecount()),10,13)
Subtitle("samples : "+String(round(((framecount())/10)/(framerate()*3))/2),10,28)
Subtitle("sample length  : "+String(round(framerate()*3)),10,43)
Subtitle("= ca. "+String(100/10/2)+"% of Movie total",10,58)
Subtitle("offset  : "+String(off)+" sec.",10,73)
############### The smart sampler routine ############### 
sampler(samples=(round(((framecount())/10)/(framerate()*3))/2), Length=(round(framerate()*3)))
Now encoding the sample incl. an offset of 30 sec, again using 2-pass

... to be continued


---------------------------------- next step ---------------------------------

... 2-pass encoding incl. offset=30: the result is 33.3 MB on HD,
which means 33.3*2 = 66.6, *10 = would be 666.0 MB m1v final filesize! Still seems to be OK.

So the average of 664 MB (2-pass, offset 0) and 666 MB (second 2-pass, offset 30) is 665 MB ... and that is what we should finally reach!

... again setting CQ 80 incl. offset=30 to encode with ... the result is 32.1 MB, which means 32.1*2 = 64.2, *10 = would be 642.0 MB m1v final filesize. Do you see how important the offset is? Same CQ, different sample position = a totally different result comes out!

So the average of 610.0 MB (CQ 80, offset 0) and 642.0 MB (second CQ 80, offset 30) is 626 MB!

So we got the 2-pass average result of 665 MB in comparison to the CQ average result of 626 MB at CQ 80.

If 626 MB = CQ 80, then 665 MB should be .... ähhhm CQ 84.984!!

( @ Krassi ... a nice site to do the "Dreisatz" (rule of three) calculation can be found here: http://www.mathepower.com/dreisatz.php )
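For reference, the "Dreisatz" step here is nothing more than a linear proportion; a minimal Python sketch with the sample sizes from above hard-coded (and it only holds if filesize really scaled linearly with CQ, which the next step shows it does not):

Code:
# Rule-of-three ("Dreisatz") estimate, assuming filesize scales linearly with CQ
cq_known    = 80.0    # CQ used for the two sample encodes
size_at_cq  = 626.0   # MB, average of the CQ 80 encodes (offset 0 and offset 30)
size_wanted = 665.0   # MB, average of the two 2-pass encodes (offset 0 and offset 30)

cq_estimate = cq_known * size_wanted / size_at_cq
print(round(cq_estimate, 3))   # -> 84.984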

BUT!! When encoding the sliced stream again using offset=0 and the calculated CQ 84.984 ... the result is 41.589 MB!!! That means 831.78 MB m1v final size!!!
That's what you meant by saying CQ is not linear ... isn't it, Kwag???
And as I know myself very well .... and as I can still remember very well now, ... wasn't this already mentioned in here!??? :banghead: :banghead: :banghead:
Yeah! Typical me, just running forward

So now I'm starting to find the right CQ by raising it step by step

... to be continued

---------------------------------- next step ---------------------------------

CQ 80.35 @ offset 0 and offset 30 gave me almost the same sizes as the 2-pass offset 0 / offset 30 samples .... so I let it run ... it will take approx. 41 min for the whole movie (117 min / PAL)

... to be continued (tomorrow )
  #46  
11-06-2003, 01:03 AM
incredible incredible is offline
Free Member
 
Join Date: May 2003
Location: Germany
Posts: 3,189
Thanks: 0
Thanked 0 Times in 0 Posts
Send a message via ICQ to incredible
---------------------------------- the next day ---------------------------------

CQ 80.35 gave me a final filesize of 655 MB; adding the mp2 audio (109 MB) gives me 764 MB in total, well, 785 MB were wanted ..

Wow! 97.325% of the wanted filesize on the first try, that's sexy!

Now this 2.675% difference could come from my step-by-step trying towards CQ 80.35, which is for sure not 100% accurate, AND from the fact that the script uses the command round(), which could also kill 100% accuracy:
(round(((framecount())/10)/(framerate()*3))/2) !!

But I think using an "offset" ping-pong, with a 2-pass encode as the first step, is the way!
So Krassi & Kwag, we should also try this script with CCE (which you also worked with, Krassi, BUT my CCE doesn't give me the possibility to encode 2-pass VBR using MPEG-1!) and with the MainConcept Encoder.

  #47  
11-06-2003, 01:30 AM
Krassi Krassi is offline
Free Member
 
Join Date: Mar 2003
Location: Germany
Posts: 390
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by kwag
Hi Krassi,

Have you tried a 2-pass encode on a very long sampler
Maybe 5 minutes of slices, and then match the file size of that sampler with CQ
Instead of doing a 2-pass of a continuous 5 minute area, the longer sliced sampler can "see" more of the footage, and then the CQ encode should be closer to target.

-kwag
That was what I tried to say, sorry
I'll test this today or this evening. The longer 2-pass should give a more accurate filesize; then matching the filesize for 5 or 6 minutes with different source ranges, as incredible says, could be the key
  #48  
11-06-2003, 01:40 AM
Krassi Krassi is offline
Free Member
 
Join Date: Mar 2003
Location: Germany
Posts: 390
Thanks: 0
Thanked 0 Times in 0 Posts
Hi incredible,
that's too much to quote
Well, you did a great job; I haven't been productive yesterday (and the movie wasn't really good).
I'll do a test with your script, it looks great; maybe a longer sample size would be good. But from my experience with TOK, and as you said, this sampler length should be OK.
The CQ non-linearity means that the filesize doesn't behave linearly with CQ. So you're right that an average could be a better value, but that's the mathematical problem of this theory. But if I'm right, this should still be better than only one sample range without a Trim
Have you made a test with a long VBR sample?
If I understand you right (it's still early in the morning), you have tested a CQ prediction?
  #49  
11-06-2003, 01:44 AM
incredible incredible is offline
Free Member
 
Join Date: May 2003
Location: Germany
Posts: 3,189
Thanks: 0
Thanked 0 Times in 0 Posts
Send a message via ICQ to incredible
Krassi, use the script above to split the whole movie into 3-sec slices, with a smart auto-calculated number of samples that gives you 5% of the movie. That should be sufficient to do a precise prediction, no matter whether CCE, the MC Encoder or TMPGEnc is used.

I chose 5% because you can simply calculate what would come out ........ 5% sample size *2*10 = "would" be 100% final size
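Just to spell out that rule of thumb (a tiny Python sketch, using the 33.2 MB figure from my 2-pass test above):

Code:
# Extrapolating a 5% sliced sample to the projected full m1v size:
# with sa=2 the sampler keeps 5% of the movie, so full size = sample * 2 * 10.
sample_mb = 33.2                 # size of the encoded 5% sample stream
full_mb   = sample_mb * 2 * 10   # -> 664.0 MB projected m1v size
print(full_mb)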

EDIT: Hey Krassi, I didn't see your last posting! (as you said, it was early in the morning! *g*)

Well, I think that, all in all, 5% of the movie in 75-frame slices will be enough.

Here's an update, which adds more variables so you can choose more settings.

Code:
###################  The source ####################
mpeg2source("H:\Path to your source.d2v") 
################# important variables #################
off=0 # offset in seconds when the sampling on the stream starts defaults = "0" or "30"
lm=3 # Sample length multiplicator, as known 3*FPS is best for accurate Prediction so default is lm=3
sa=2 # Total size of sample stream! 2= 5% of the whole movie and 1=10%, default is 2! (do not use 0!! This would cause a division by Zero!!)
################## Setting the offset #################
Trim(round(framerate()*off), framecount()) 
### In case of our workout we resize to 352x288 just for encoding speed##
GripCrop(352,288) 
GripSize(resizer="BilinearResize") 
GripBorders() 
Letterbox(0,0,8,8) 
############# subtitels just for the testing workout ############ 
Subtitle("Frames total movie/unsliced : "+String(framecount()),10,13) 
Subtitle("samples : "+String(round(((framecount())/10)/(framerate()*lm))/sa),10,28) 
Subtitle("sample length* : "+String(round(framerate()*lm)),10,43) 
Subtitle("= ca. "+String(100/10/sa)+"% of Movie total",10,58) 
Subtitle("offset* : "+String(off)+" sec.",10,73) 
############### The smart sampler routine ############### 
sampler(samples=(round(((framecount())/10)/(framerate()*lm))/sa), Length=(round(framerate()*lm)))
I added the variable "sa" to set the total size of the sample stream!
Choosing "sa=1" would give you a 10% sampler-based stream of the movie, since you asked for a longer sample stream!
I also implemented (I'm at work, so I just wrote it in here) a variable called "lm"; this variable sets the sample length, which, as we know, is best set to 3*FPS for accurate prediction results!
So use the latest script in this reply when doing your tests!

Referring to your other question:
I did the tests as shown above --- the same sample stream used for 2-pass and CQ encoding --- and the same offset sample stream used for 2-pass and CQ encoding .... I think that should be the way to do it.
  #50  
11-06-2003, 05:32 AM
Krassi Krassi is offline
Free Member
 
Join Date: Mar 2003
Location: Germany
Posts: 390
Thanks: 0
Thanked 0 Times in 0 Posts
I've done some more tests:
"Master" is still the long VBR sample (30 min.).
I've done 4 predictions in CQ mode with a length of 9000 frames, so optimal sample size should be 69.632KB.
I've selected different parts of the movie to predict these 9000 frames.
Here the resulting CQ's:
30,45,50,33
so the average would be 39,5
I'll do a full encode with this CQ value.
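For anyone following along, a small Python sketch of where those numbers come from (it assumes the 30-minute master is PAL 25 fps, i.e. 45,000 frames; the 348.163 KB master size is the figure quoted in post #55 below):

Code:
# Proportional target for a 9000-frame CQ prediction against the 30-min VBR master
master_frames = 30 * 60 * 25   # 45000 frames, assuming 25 fps PAL
master_kb     = 348163         # size of the 2-pass VBR master encode
sample_frames = 9000
target_kb = master_kb * sample_frames / master_frames
print(round(target_kb))        # -> 69633 KB, i.e. the ~69.632 KB quoted above

# Averaging the four predicted CQ values:
cqs = [30, 45, 50, 33]
print(sum(cqs) / len(cqs))     # -> 39.5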

@incredible:
I'll set up a test this evening or later on. The test above with time slices should give even better values
I will also switch to TMPGenc later on.

EDIT: I know about the non-linearity of CQ, but the average should still give a better value
  #51  
11-06-2003, 06:48 AM
incredible incredible is offline
Free Member
 
Join Date: May 2003
Location: Germany
Posts: 3,189
Thanks: 0
Thanked 0 Times in 0 Posts
Send a message via ICQ to incredible
Quote:
"Master" is still the long VBR sample (30 min.).
Is this just a "cut" of a part of the whole movie-stream or a stream-package of slices like a sampler() command generates??
I do not remeber now your last status.
  #52  
11-06-2003, 06:51 AM
Krassi Krassi is offline
Free Member
 
Join Date: Mar 2003
Location: Germany
Posts: 390
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by Krassi
so the average would be 39,5
I'll do a full encode with this CQ value.
Results:
Wanted VS: 1.317.910 KB
Final VS: 1.302.104 KB
That is 0.1 %

EDIT: BTW, I'm still using a range from 300-5000, and that's really hard to predict
  #53  
11-06-2003, 07:43 AM
Krassi Krassi is offline
Free Member
 
Join Date: Mar 2003
Location: Germany
Posts: 390
Thanks: 0
Thanked 0 Times in 0 Posts
@Kwag:
Do you think it is possible to make CQMatic take a 2-pass VBR sample at the beginning, with a doubled sample size or similar? After that you could let it do the job like it is doing now
I'm just redoing my last test with TMPGEnc now.
It's more time-consuming now
  #54  
11-06-2003, 08:31 AM
incredible incredible is offline
Free Member
 
Join Date: May 2003
Location: Germany
Posts: 3,189
Thanks: 0
Thanked 0 Times in 0 Posts
Send a message via ICQ to incredible
Hey Krassi,

first of all, congratulations on your 0.1%-filesize-accuracy hype!

Just to keep me following your process ... this accuracy was obtained by using one complete 30-min part of the movie, and you took this 30-min part starting at several timestamps of the movie, and then you took the average of the CQ results to get a final average CQ to do the full encode with. Did I understand that right?

Because you don't need these "long" manual offset maneuvers to get a good spread of scenes from your movie. If your sample takes 30 min and you just move it within the timeline, many frames will be encoded twice.... ?? (well, this logic depends on whether I understood your workout right, as asked above *g*)

So to get "large" samples using the script above (no matter in which encoding application), you could simply raise the multiplier variable "lm" above 3; for example, if you choose 6, you'll obtain 6-second slices in a stream of 5% of the whole movie, and if you also set "sa" to 1 you will obtain 6-sec slices in a sampler of 10% of the whole movie .... and you can go even further by changing the "../sa..." parts in the script to "..*2", which will give you a 20%(!) sample size of the whole movie (*3 = 30%, *5 = 50% and so on).
And I think that's what you want to have ... big 2-pass VBR stream pre-encodings to compare.
But I think we should find a good slice-based way which doesn't take too much time for the first 2-pass pre-encoding.
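A small aside on those percentages (my own check of the script's arithmetic, nothing more): because the slice length (fps*lm) cancels out, the fraction of the movie the sampler keeps depends only on "sa", which is exactly why raising "lm" alone only makes the slices longer, not the stream bigger.

Code:
# Fraction of the movie kept by the sampler routine (ignoring the round()):
#   samples * length = ((framecount/10)/(fps*lm)/sa) * (fps*lm) = framecount/(10*sa)
# so "lm" only changes how long each slice is, while "sa" alone sets the total amount.
def sampled_fraction(sa):
    return 1.0 / (10 * sa)

print(sampled_fraction(2))   # sa=2 -> 0.05 (5% of the movie)
print(sampled_fraction(1))   # sa=1 -> 0.1  (10% of the movie)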

BTW: This workout we are doing here "rocks"!
  #55  
11-06-2003, 08:59 AM
Krassi Krassi is offline
Free Member
 
Join Date: Mar 2003
Location: Germany
Posts: 390
Thanks: 0
Thanked 0 Times in 0 Posts
I'm a bit confused today.
So please help me out:
Why do a VBR 2-pass at all?
We could calculate the size the optimal VBR encoding should have with
Code:
Videosize in KB = (average bitrate of complete movie in kbit/s / 8) * minutes * 60
In my example, the optimal filesize of the VBR would be
1568/8 x 30 x 60 = 352.800 KB; the encoded one was 348.163 KB.
So why should we do this?
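As a quick sanity check of those numbers, a one-liner version of the formula (plain Python, nothing the formula above doesn't already say):

Code:
# Target video size from the average bitrate: kbit/s / 8 = KB/s, times running time in seconds.
avg_bitrate_kbit = 1568          # average bitrate of the complete movie
minutes          = 30
video_kb = avg_bitrate_kbit / 8 * minutes * 60
print(video_kb)                  # -> 352800.0 KB; the actual 2-pass encode was 348.163 KB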

@incredible:
I saw your post in the preview.
Exactly, but now I think the VBR encoding is really not needed anymore...

EDIT: typo
  #56  
11-06-2003, 11:07 AM
incredible incredible is offline
Free Member
 
Join Date: May 2003
Location: Germany
Posts: 3,189
Thanks: 0
Thanked 0 Times in 0 Posts
Send a message via ICQ to incredible
MAN, I think you're right!

(All the others who have already gone deep into prediction, like the authors of TOK etc., are maybe laughing while reading the last 4-5 posts?)

Because .... if I want a final filesize incl. audio of 785 MB,
sure, the 2-pass engine at its average bitrate setting, even when using sliced or partitioned streams, would always give an output size that, multiplied up by the sampling factor, would give ca. 785 MB minus the audio size - because it's 2-pass.

Now .. (I'm referring to my script, where maybe no 2-pass VBR first turn is needed at all) ... I said the "default" calculation of my sampler AVS routine gives me an output of 5% of the whole movie, which should match 5% of 785(-audio size) MB IF CQ is set right!
So the real advantage I can use is the offset ping-pong in every CQ encoding, until the average of BOTH (ping and pong) streams matches 5% of 785(-audio size) MB (see the sketch below).
And finally that's what I did yesterday evening, when I noticed that the mathematical calculation does NOT match (but it was maybe too late to realize what that could mean as a whole *ggg*); the 2-pass output would only make sense IF we could use a mathematical formula like the "Dreisatz" (if a corresponds to b, then c corresponds to d), BUT as we found out, this fails because CQ encoding is non-linear with respect to its values ..... so does it make sense??? And here I come to the same conclusion as you: this 2-pass/CQ prediction will not work because of the non-linear CQ values!
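To make the offset ping-pong concrete for anyone reading along, a rough Python sketch of the search loop; encode_sample() is a purely hypothetical helper standing in for whichever encoder is driven with the sampler script, and the 0.05 step size is just an example:

Code:
# Offset ping-pong CQ search (sketch): encode the 5% sampler stream twice per try,
# once with off=0 and once with off=30, average the two sizes and compare against
# 5% of the wanted video size (target / 20). Raise or lower CQ until they match.
def predict_cq(target_video_mb, cq_start, encode_sample, tolerance_mb=0.5):
    target_sample_mb = target_video_mb / 20.0     # 5% of the final m1v size
    cq = cq_start
    while True:
        avg = (encode_sample(cq, off=0) + encode_sample(cq, off=30)) / 2.0
        if abs(avg - target_sample_mb) <= tolerance_mb:
            return cq
        # CQ vs. size is not linear, so no Dreisatz here - just small steps
        cq += 0.05 if avg < target_sample_mb else -0.05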

Confusing, confusing, confusing .......

PS: Where are Kwag and the others??
Maybe someone with an objective view (or a freshly reset brain) should analyze our latest methods
  #57  
11-06-2003, 12:57 PM
Krassi Krassi is offline
Free Member
 
Join Date: Mar 2003
Location: Germany
Posts: 390
Thanks: 0
Thanked 0 Times in 0 Posts
I have no headAC3 anymore, so I can see it clearly now .
And I still think we won't need a prediction with VBR; we can assume that the formula above will give the right bitrate for sure.
And that's what we have been predicting for all along, just expressed in another way...

We can't use the "Dreisatz" (rule of three) because of the non-linearity. But I think we could use an average of the CQ predictions. Or we could set up only one larger sample
  #58  
11-06-2003, 04:45 PM
incredible incredible is offline
Free Member
 
Join Date: May 2003
Location: Germany
Posts: 3,189
Thanks: 0
Thanked 0 Times in 0 Posts
Send a message via ICQ to incredible
Update, so EDITed: Problems on my first try with CCE!
Maybe I set something wrong tonight (yes, I was really tired), because the offset ping-pong prediction gave Q=12 and a correct sampler size of ca. 33.2 MB, so 33.2*2 = 66.4, *10 = 664 MB (as you remember, the sample takes 5% of the whole movie), and therefore 664 + 109 (audio) would give 773 MB including audio. BUT when doing the full encode at Q=12 in CCE, a result of 505 MB came out instead of the desired more-or-less 664!
I'll check my settings tonight, because there must be a mistake in my CCE settings or sampling calculations (not in the script)
Well, first try ... to be continued tonight! And I'm also going to test with the MainConcept Encoder this weekend.
Well, ... we'll see
  #59  
11-07-2003, 01:00 PM
vmesquita vmesquita is offline
Invalid Email / Banned / Spammer
 
Join Date: May 2003
Posts: 3,726
Thanks: 0
Thanked 0 Times in 0 Posts
Hello Krassi and incredible,

I read the thread at the beginning (first page), but never came back to see how it was developing. Incredible sent me a PM, so I came to check (I love this prediction subject )

Like Krassi said, doing a 2-pass VBR to know the final size is not really necessary, since it can be calculated from a formula.

Also, for better accuracy, the slices must be a multiple of the GOP. So if you're using my KDVD CCE templates, you need 15-frame slices (GOP 15). If you're using the CCE default, you need 12-frame slices. CCE can't create GOPs longer than 15. I don't know about MCE, but you can check using Bitrate Viewer.

I still don't understand the difference between this and the manual prediction technique; could you please explain it to me? The ping-pong thing looks interesting, but I haven't been able to figure it out so far... I will try the script.

I have developed a technique for CCE prediction. It works, but could be improved. I'll explain the steps, maybe it can be useful:
1) Calculate the desired sample size.
2) Encode at the maximum desirable Q factor, let's say 1. Record the filesize at QMax.
3) Encode at the minimum desirable Q factor. I never encode over 40 in CCE, so my QMin would be 40. This is the limitation of this technique: you need a roof. Not a problem for me, since quality over 40 is bad to my eyes.
4) Now the prediction cycle starts. I assume the scale is linear (which it is not) and calculate the predicted Q factor.
If the obtained filesize is within about 3% of the desired, I have the Q factor I want.
If the obtained filesize is bigger than QMin's, this becomes the new QMin and I go back to 4.
If the obtained filesize is smaller than QMax's, this becomes the new QMax and I go back to 4.

Not sure if it helps, but anyway: this method gives me around 3% accuracy.
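For what it's worth, here is how I read that cycle as code; a Python sketch where encode_sample(q) is a hypothetical stand-in for a CCE sample encode that returns the resulting size in KB, and the bound bookkeeping is my interpretation of steps 2-4:

Code:
# CCE Q-factor prediction cycle (sketch): keep a best-quality bound (Q=1, biggest file)
# and a worst-quality bound (Q=40, smallest file), interpolate linearly between them,
# encode at the predicted Q and shrink the interval until the size is within ~3% of target.
def predict_q(target_kb, encode_sample, q_best=1.0, q_worst=40.0):
    size_best  = encode_sample(q_best)    # biggest file
    size_worst = encode_sample(q_worst)   # smallest file, the "roof"
    while True:
        # linear interpolation of Q for the target size (the scale is not really linear)
        q = q_best + (q_worst - q_best) * (size_best - target_kb) / (size_best - size_worst)
        size = encode_sample(q)
        if abs(size - target_kb) <= 0.03 * target_kb:
            return q
        if size > target_kb:              # file too big -> need a higher Q number
            q_best, size_best = q, size
        else:                             # file too small -> need a lower Q number
            q_worst, size_worst = q, size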

I have an idea that may (in theory, never tested it) increase the accuracy dramatically. I would call it adaptive Q-factor encoding. It's very simple, in fact:
1) Do the manual prediction for the full movie, but encode only 80% of the movie.
2) Check the obtained filesize and predict again for the remaining 20% of the movie. Let's say you predicted 700 MB for the full movie. 80% of 700 MB would be 560 MB, but you got a 530 MB file, so you predict the last 20% for 170 MB. This should also be faster.
3) Encode the last 20% of the movie. The error should be very small; in the example, the maximum error would be 3% of 170 MB, i.e. 5 MB, vs. the traditional method's error of 21 MB.
4) Join the 80% initial part and the 20% last part.

I have no idea what tool to use to join MPEGs. And you may say I am cheating, since the last part of the movie would have more quality than the initial part, but I think that would not be perceptible, and in the end we would have filled the disc to the edge. This idea can also be used when trying to put 3 movies on one DVD: predict, encode the first movie, predict again for the last two, and so on. In that case no joining would be needed...
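The size bookkeeping in that example, spelled out (a tiny Python sketch; it only does the arithmetic, no encoding):

Code:
# Adaptive two-stage budget: predict for the whole movie, encode 80%, then re-predict
# the remaining 20% against whatever budget is actually left over.
target_full_mb   = 700
encoded_80_mb    = 530              # what the first 80% actually came out as
remaining_budget = target_full_mb - encoded_80_mb
print(remaining_budget)             # -> 170 MB target for the last 20%

# Worst-case error at ~3% prediction accuracy:
print(0.03 * remaining_budget)      # ~5 MB (two-stage)
print(0.03 * target_full_mb)        # ~21 MB (one prediction over the whole movie)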

Another untested idea for my CCE prediction method (not the adaptive one): use a model for the Q factor other than a linear one. Do some sample encodes and see how the Q-factor function varies. If we can model it as some kind of exponential, the cycle above would converge faster, with fewer passes...
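One way that could look (only a sketch, under the assumption that size decays roughly exponentially with Q; the numbers at the bottom are made up purely for illustration):

Code:
import math

# Fit size(Q) = a * exp(b * Q) through two calibration encodes, then invert for the target size.
def q_from_exponential_model(q1, size1, q2, size2, target_size):
    b = math.log(size2 / size1) / (q2 - q1)
    a = size1 / math.exp(b * q1)
    return math.log(target_size / a) / b

# Hypothetical figures: Q=1 gives 900 MB, Q=40 gives 300 MB, and we want 700 MB.
print(round(q_from_exponential_model(1, 900, 40, 300, 700), 2))   # -> about 9.9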

Also, please read this thread for my initial CCE prediction tests (probably you already did, but it's interesting for everyone reading the thread):

http://www.kvcd.net/forum/viewtopic.php?t=5142

[]'s
VMesquita
  #60  
11-07-2003, 02:54 PM
Krassi Krassi is offline
Free Member
 
Join Date: Mar 2003
Location: Germany
Posts: 390
Thanks: 0
Thanked 0 Times in 0 Posts
Hi VMesquita,

thanks for sharing your opinion on this.
Quote:
Originally Posted by vmesquita
I have an idea that may (in theory, never tested) increase dramatically the accuracy. I would call it adaptive Q-Factor encoding. It's very simple in fact.
1)You do manual prediction for the full movie, but encode only 80% of the movie.
2) check the remaining filesize and predict again, using the 20% of the movie. Let's say you predicted for 700 Mb for the full movie. 80% of 700 Mb would be 560 Mb, but you got a 530 Mb size. So you predict the last 20% for 170 Mb. Should be faster.
3) Encode the last 20% of the movie. Error should be very small, in the example, maximmum error would be 3% of 170 Mb: 5 Mb vs traditional method error: 21 Mb.
6) Join the 80% initial part and the 20% last part.
That should give perfect accuracy. Maybe also the other way round. Personally, I prefer having the same quality throughout the whole movie.

Quote:
Another untested idea for my CCE prediction method (not the adaptative): use a model for Q factor different from linear. Do some sample encodes and see how the Q factor function varies. If we can modelate it as some kind of exponential, this would go faster with less cycles...
I only set up a short test once and haven't found any mathematical solution, but it is worth a try.