Ok, now I have a hint on how to process and find the bit rate information ( Thanks Arno :wink: ), so I'm going to play with this later tonight :D
@rhino, The file analysis would be very slow in anything but a compiled language ( I'm a U*IX programmer too ;) ), because it has to take the complete file produced by DVD2AVI+Stats and normalize the data before writing the new TMPEG-compatible file. That is time consuming, and Perl, Python, etc. are not good candidates for it! ( I think, maybe not? ). But hey, if this gets done, the beauty of it is that you or anyone else can write different algos for that data, and we can all optimize the bit rate allocation, etc. :) Stay tuned! -kwag
@kwag
It's a fair point - compiled is faster than scripting. I generated a 5 MB file in the format you are hoping to output from DVD2AVI, and reading it in and breaking it up into an array takes about 16 seconds on a 296 MHz SPARC box. So it's good for prototyping, then maybe someone could code the routines into a DLL so ToK or MovieStacker could become a front end to it? Cheers,
@kwag
Can you make your src of DVD2AVI available? I hope to have VC++ in the next couple of days and plan to have a poke about DVD2AVI. Cheers,
Quote:
Quote:
So a standalone .exe and/or a .dll can be created. And great idea on integrating that with ToK and/or MovieStacker ;) Here's something you could try, to see how fast it works in a scripting language: create an array of ~170,000 integers (about the size of an average film in frames). No need for them to be unsigned, because the MAX bit rate on a DVD is ~9Mbps. So fill the array with random values from 2,000 to 9,000. Now normalize the array with a scale, simulating the bit rates we will use with TMPEG. That is, MIN=300, MAX=2,500, so 2,000 (DVD) would be 300 (KVCD), the low watermark, and 9,000 (DVD) would be 2,500 (KVCD), the high watermark. See how long it takes to process that :!: Then of course there's one more step, and that is to arrange the bit rates and trim and adjust, to create the average bit rate that MovieStacker suggests for the movie being processed. That is the step that will take the longest and is the trickiest of all ;) -kwag
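The timing experiment kwag proposes can be sketched quickly. Here's a hypothetical Python version (the thread's actual scripts are in Perl); the array size and the DVD/KVCD watermarks are taken straight from the post:

```python
import random
import time

SRC_MIN, SRC_MAX = 2000, 9000   # DVD bit rate range from the post (kbps)
DST_MIN, DST_MAX = 300, 2500    # KVCD low/high watermarks (kbps)
N_FRAMES = 170_000              # ~average film length in frames

# Fill the array with random DVD bit rates.
frames = [random.randint(SRC_MIN, SRC_MAX) for _ in range(N_FRAMES)]

# Normalize linearly: 2000 -> 300, 9000 -> 2500.
start = time.time()
scale = (DST_MAX - DST_MIN) / (SRC_MAX - SRC_MIN)
normalized = [int(DST_MIN + (b - SRC_MIN) * scale) for b in frames]
print("normalized %d frames in %.3f s" % (len(normalized), time.time() - start))
```

On current hardware this pass takes a small fraction of a second even in a scripting language, so the cost concern mainly reflects the machines of the era.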
To read in 170,000 entries in the format "Frame No,Frame Type,Value", then process the array converting from the scale 2000-9000 to 300-2500, takes about 27 seconds using Perl.
Now, this is with the scale ranges and mappings hard coded. If we don't know the MIN and MAX values (i.e. 2000 to 9000) at the start, then we need to process the array and find them. Which is fine: it will not add much time to the 27 seconds, and we can do it whilst reading in the file, so it will not take any extra user time to calculate the scale-to-scale mappings. As for the last step, I'm not sure what needs to be done to align to the MovieStacker average, but if you can explain the theory I'll code it up. Then all we need is your modified DVD2AVI to do a few tests. Cheers :)
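The read-and-rescale pass rhino describes might look like this in Python (a sketch; the real script is Perl, and the function and file names here are mine). MIN and MAX are tracked during the read, so the scale-to-scale mapping needs no extra pass over the data:

```python
def load_frames(path):
    """Read 'Frame No,Frame Type,Value' lines, tracking MIN/MAX during the read."""
    frames, lo, hi = [], float("inf"), float("-inf")
    with open(path) as f:
        for line in f:
            num, ftype, value = line.strip().split(",")
            value = int(value)
            frames.append((int(num), ftype, value))
            lo, hi = min(lo, value), max(hi, value)
    return frames, lo, hi

def rescale(frames, lo, hi, dst_min=300, dst_max=2500):
    """Map bitrates linearly from the observed [lo, hi] onto [dst_min, dst_max]."""
    scale = (dst_max - dst_min) / (hi - lo)
    return [(n, t, int(dst_min + (v - lo) * scale)) for n, t, v in frames]
```

With the observed range 2000-9000, a mid-range value like 5500 lands at 1400 on the 300-2500 KVCD scale.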
Quote:
First, after normalizing the DVD->KVCD bit rates, we have to scan the array and find the average bit rate. Then, using MovieStacker (or another VBR calculator), the array has to be balanced to the wanted average bit rate. This is the part that will take the longest time to process, because if the average bit rate is above the wanted value, you have to lower bit rates! So the array will probably have to be sorted by bit rate, and then clipped down from the high bit rate end in order to bring down the average. This will require many, many passes to fine tune and trim, re-reading the complete array again and again after adjustments to re-compute the average bit rate, until the wanted average is obtained. Fun, isn't it :mrgreen: Well, that would be my algorithm, just off the top of my head. Of course, in reality, it will probably be slightly different (but not much!) :wink: -kwag
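One naive way to realize kwag's clip-the-peaks idea, as a Python sketch (the step size, stopping rule, and names are my guesses, not anything from the thread). Instead of sorting, it lowers a ceiling and caps everything above it, which trims the high end of the distribution the same way:

```python
def trim_to_average(rates, target_avg, floor=300, step=25):
    """Repeatedly cap the highest bitrates with a falling ceiling
    until the overall average drops to the wanted target."""
    rates = list(rates)
    cap = max(rates)
    while sum(rates) / len(rates) > target_avg and cap > floor:
        cap -= step
        rates = [min(r, cap) for r in rates]
    return rates
```

As kwag notes, each pass re-reads the whole array to re-compute the average, so the loop count depends on how far above target the starting average is.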
@Kwag,
Can a weighted average be used? The sum of the bitrates for each frame divided by the number of frames, or is this too simplistic? -bp
Quote:
This might just be another "fine tuning" adventure, just like the file prediction method :D -kwag
@Kwag
That actually does not sound too bad. So basically we get an average from our normalised KVCD data and then scale either up or down to the MovieStacker average. Do we then cap anything that goes over 2500 at 2500, and fix anything that drops below 300 at 300 (or whatever MIN/MAX values are set - this will obviously be configurable :) ). I'll crank something out later with the fudged data I am using and we'll see what it comes out with. Cheers,
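The scale-then-clamp step described above could look like this (a hedged sketch; the function and parameter names are mine, not from rhino's script):

```python
def rebalance(rates, target_avg, lo=300, hi=2500):
    """Scale every bitrate by target_avg / current_avg,
    then clamp to the configured MIN/MAX watermarks."""
    factor = target_avg * len(rates) / sum(rates)
    return [max(lo, min(hi, int(r * factor))) for r in rates]
```

Note that the clamping pulls the real average away from the target again, which is exactly why kwag expects several fine-tuning passes afterwards.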
@Kwag,
Can't wait to hear about the results (good or bad) from this dummy data. A crude start, like file prediction, but over time the refinements should prove very interesting. :D -bp
Quote:
And look where we are today :mrgreen: -kwag |
Was talking to a friend last night (far better at maths than I am - he remembers things from school!), and he was making a few suggestions on how to take our normalised KVCD data and scale it against MovieStacker's average bit rate.
So for the first draft I'll calculate the KVCD average and rescale against MovieStacker, but ideally, he says, we need to plot the bell curve and factor out redundant points. He suggested that once we get the I, P and B frame information we generate a few graphs from various films to see what kind of curves we get. Long term it would be good to have a graph plotted in our app so that we can manually tweak upper/lower values to optimise our KVCD average. Anyway, I digress. I'll have a script ready later. It's written in Perl and only about 70 lines long, so rewrite, fix, etc. in the language of your choice. Cheers,
Here is a sample of the output I have generated. The frames are based on the output from kwag's version of DVD2AVI. I changed the output from "0 -> I Frame" to "0,I,3456", where the bitrate value was created randomly between 2000 and 9000 (see the SRC_MIN and SRC_MAX parameters).
The data is rescaled to the KVCD scale, then re-averaged against the average value from MovieStacker. So, based on The Matrix, you can see the sample output below. The re-average of the KVCD data is slightly out, as I rounded values instead of keeping them as floats until the output section, but for now it's close enough. I'll upload the script later with a pointer so you can mess with it over the weekend. Cheers, Code:
Using the Samlin Ritchenson method:)
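For reference, the per-frame lines of the form "frame,type,BitRate=CBR:rate" that the thread feeds to TMPEG's "Force Picture Type" import could be generated like this (a minimal Python sketch; the real script is Perl, and the helper name is mine):

```python
def format_for_tmpeg(frames):
    """Render (frame_no, frame_type, bitrate) tuples as
    'frame,type,BitRate=CBR:rate' lines."""
    return ["%d,%s,BitRate=CBR:%d" % (n, t, r) for n, t, r in frames]

print("\n".join(format_for_tmpeg([(0, "I", 551), (1, "B", 551)])))
```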
Hi rhino,
I just tested your data, but we have a problem with TMPEG :evil: When I import your data under "Force Picture Type", the frame order is correct just like you specified here: Code:
0,I,BitRate=CBR:551

When I load your data, I get this on TMPEG's frame timeline:

Frame 0 (I Picture) 551Kbps, New Group
Frame 1 (B Picture) 551Kbps
Frame 2 (B Picture) 551Kbps
Frame 3 (P Picture) 551Kbps
Frame 4 (B Picture) 551Kbps
Frame 5 (B Picture) 551Kbps
Etc. :x

Something is not correct in TMPEG, or it just works that way :!: If this can be fixed or made to work the way it should, we're fine. If not, we're screwed :cry: I'm going to play with TMPEG and see exactly why it behaves like this. -kwag
@Kwag,
This could be important. It's from vcdhelp: Quote:
Thanks bp,
Yes, that's the way I have it set (MVBR), but it behaves the same with both MPEG-1 and MPEG-2. I'm going to try older versions of TMPEG and see if it does the same :roll: -kwag
I have been messing with TMPGEnc today (as I'm working from home :) ) and I noticed that I could not set the bitrate for P and B frames. And man, does it take a long time to load in 192,422 frames. This could be a big downside to the process.
I'm currently fighting with my webhost - they took away my SSH access because some users screwed things up when using shell access.
Site design, images and content © 2002-2024 The Digital FAQ, www.digitalFAQ.com
Forum Software by vBulletin · Copyright © 2024 Jelsoft Enterprises Ltd.